SFS Performance Management Part I

February 1997 - Version 6.1
Bill Bitner
IBM Corp.
1701 North St.
Endicott, NY 13760
(607) 752-6022
Internet: bitner@vnet.ibm.com
IBMMail: USIB1E29 at IBMMAIL

(c) Copyright IBM Corporation 1991, 1997 - All Rights Reserved


Table of Contents

Disclaimer
Trademarks
Acknowledgements
Overview
SFS Concepts
SFS Structure - Server Data
SFS Performance Management
Preventative Tuning
CP Tuning Considerations
CMS Tuning Considerations
DASD Placement
VM Data Spaces
Recovery
Multiple File Pools
Monitoring Performance
Solving Performance Problems
Confirm and Isolate the problem
Take Corrective Action
Evaluate for effectiveness
Case Study - VMPRF Report (PRF006)
Case Study - VMPRF Report (PRF083)
Case Study - Use of S and C Table
Case Study - VMPRF Report (PRF008)
Case Study - VMPRF Report (PRF006)
Case Study - VMPRF Report (PRF083)
Some Application Performance Tips
Understanding your application's performance
Summary
References
Acronyms


Disclaimer

The information contained in this document has not been submitted to any formal IBM test and is distributed on an "As is" basis without any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environment do so at their own risk.

In this document, any references made to an IBM licensed program are not intended to state or imply that only IBM's licensed program may be used; any functionally equivalent program may be used instead.

Any performance data contained in this document was determined in a controlled environment and, therefore, the results which may be obtained in other operating environments may vary significantly.

Users of this document should verify the applicable data for their specific environment.

It is possible that this material may contain references to, or information about, IBM products (machines and programs), programming, or services that are not announced in your country. Such references or information must not be construed to mean that IBM intends to announce such IBM products, programming, or services in your country.

Should the speaker get too silly, IBM will deny existence or responsibility for the speaker.

Speaker Notes

This presentation has been made over a dozen times since 1991. Except for a few changes, the bulk of the content is the same. One should not read that as a sign of lack of commitment to SFS, but as a sign of performance management being a priority since day one.

Last updated February 3, 1997 (Version 6.1)


Trademarks

The following are trademarks of the IBM Corporation

  • Virtual Machine/Enterprise Systems Architecture

  • Virtual Machine/Extended Architecture

  • VM/ESA

  • VM/XA

  • VTAM

Acknowledgements

My thanks to various folks for helping pull this material together.

  • Charlie Bradley

  • Wes Ernsberger

  • Sue Farrell

  • Butch Terry (last but not least)

Speaker Notes

The speaker notes were never written with the intent of including them in handouts. So if you are reading this, please keep in mind that I never took the time to do a quality job with the speaker notes. Please excuse grammar and typos. However, any suggestions or corrections are appreciated.


Overview

  • SFS Structure
  • SFS Performance Management
    • Preventing performance problems
    • Monitoring performance
    • Solving performance problems
  • Case Study
  • Some application tips

Speaker Notes

This presentation is geared towards the ESA feature of VM/ESA, not the 370 feature, VM/SP, or VM/HPO (however, some things do apply).

This presentation covers the tasks related to the performance of SFS file pool servers and is meant to take the mystery out of them. For VM/ESA 1.1.1, this information is in Chapter 20 of the CMS Planning and Administration manual. As of VM/ESA 1.2.0, much of this material was moved to the VM/ESA Performance manual. In addition, some performance tips are given for applications utilizing SFS data.

In the VM library for VM/ESA 1.2.0, all performance data was consolidated into a single manual, "VM/ESA Performance". It contains material relevant to this presentation.

SFS Structure - The presentation is really meant for those that know and understand at least the basics of SFS. However, since some folks attend out of curiosity, a few foils are provided to cover the basics and structure of SFS.

SFS Performance Management - most of the time will be spent on this topic.


SFS Concepts

  • Coexists with the minidisk file system

  • Components in

    • CMS End User

    • SFS Server Virtual Machine

     End User                      Server
   +----------+                  +----------+
   |          |                  |          |
   |   CMS    |                  |   CMS    |
   |          |                  |          |
   +----------------------------------------+
   |                   CP                   |
   +----------------------------------------+

Speaker Notes

The presentation is really meant for those that know and understand at least the basics of SFS. However, since a lot of folks attend out of curiosity, a few foils are provided to cover the basics and structure of SFS.

SFS coexists with the current minidisk (EDF) file system. For our purposes, SFS is made up of two chunks: the stuff in the end user virtual machine (CMS nucleus + CSL) and the stuff in the server virtual machine. It is important to note that communication is performed via APPC/VM with a private protocol.

The figure represents SFS (without data space exploitation)

  • when a user writes to a file, CMS in the user virtual machine sends the data to be written to the server virtual machine
  • the server virtual machine writes the data to the file in the file pool
  • for a user to have file space in the file pool, the user must be ENROLLED
  • communication between the user and server is performed via APPC/VM with private protocol

SFS Structure - Server Data

  • Control Data

    • POOLDEF file - Server A-disk

    • File Pool Control Minidisk

    • Catalog Storage Group - Storage Group 1

  • Log Data

    • Log Minidisk 1

    • Log Minidisk 2

  • User Data

    • Storage Group 2 Minidisks

      ...

    • Storage Group 'n' Minidisks

Speaker Notes

To level set on terminology, we split the SFS server structure into 3 parts:

  • Control data is the management part of SFS. The POOLDEF file describes the configuration/allocation of minidisks for various uses. The control minidisk is used to map out the other disks used for real work. Storage Group 1 holds the catalog information; I'll try to refer to it as the catalog so as not to confuse it with the other storage groups.

  • Two log disks are provided to mirror each other for RAS reasons. They hold information related to logical units of work (LUWs).

  • User Data is the actual file data blocks (the stuff inside the files). User storage group numbers start at 2 and go to "n".

SFS Performance Management

  • Preventing performance problems

  • Monitoring performance problems

  • Solving performance problems

Speaker Notes

This presentation is broken down into three pieces. I often use the lawn mower analogy. It is best to read the instructions when putting it together. Periodically check the fluids and replace spark plugs as necessary. When it is performing poorly, check various items and make adjustments as necessary.


Preventative Tuning

  • CP tuning considerations

  • CMS tuning considerations

  • DASD placement

  • VM Data Spaces

  • Recovery

  • Multiple file pools

Speaker Notes

The first task, preventing problems, is what we refer to as preventative tuning. It involves a list of performance guidelines to apply when you are defining a new file pool or modifying an existing one. The ones I'll discuss are those listed on the foil.

If these guidelines are followed, you usually won't have any SFS performance problems.


CP Tuning Considerations

  • OPTION QUICKDSP

  • SHARE REL 1500

  • Minidisk caching

    • Make logs ineligible (directory MINIOPT NOMDC)

    • Control minidisk not eligible

    • Other server minidisks benefit greatly

    • Directory OPTION NOMDCFS statement to avoid limit on MDC insertions

  • CP SET RESERVED

Speaker Notes

Setting QUICKDSP on ensures that the server will not have to wait in the eligible list for system resources to become available. For more information, refer to the CP Command and Utility Reference manual.

SET SHARE REL will place the server machine in a more favorable position in the dispatch queue. Why 1500? The default setting for a user is 100; a server supporting, say, 15 users would then warrant 1500. This should be set in line with other server settings, such as VTAM's.

  • Minidisk caching
    • The logs will not benefit from MDC because their I/O activity is almost entirely writes.
    • Because it has a 512-byte blocksize, the control minidisk is not eligible. Even if it were eligible, it would not benefit due to high write activity.
    • The rest of the server minidisks are eligible, and caching them can be quite beneficial.
    • The NOMDCFS option means No MDC Fair Share limiting. It overrides the CP MDC processing that restricts inserts for any given virtual machine; after all, the SFS server is doing I/O on behalf of others.
  • SET RESERVED establishes the number of pages the virtual machine is entitled to have resident in real storage at all times. Use it when the server is serial page faulting. State that all SFS page faults are serial, and describe serial page faults (multiple users waiting for the server).
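
To make the above concrete, here is a rough sketch of the directory-related items (user ID, password, storage sizes, and minidisk extents are hypothetical; see the CP Planning and Administration manual for exact statement syntax):

          * CP directory fragment for an SFS server (illustrative values)
          USER RWSERV1 XXXXXXXX 32M 32M G
          * favor the server in the scheduler and dispatcher
            OPTION QUICKDSP NOMDCFS
            SHARE RELATIVE 1500
          * a log minidisk; MINIOPT NOMDC keeps it out of minidisk cache
            MDISK 0305 3380 101 10 VOL001 MR
            MINIOPT NOMDC

CP SET RESERVED itself is a runtime command rather than a directory statement; an authorized user issues it after the server is logged on, for example CP SET RESERVED RWSERV1 nnnn, with nnnn sized from the server's working set.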

CMS Tuning Considerations

  • Choose USERS parm value carefully

    • USERS is an SFS startup parameter

    • best estimate of # of users at peak activity

    • server optimizes its processing based on value

    • Better to over-estimate than under-estimate

  • CMS SFS file cache

    • controls read ahead and write behind

    • defaults to 20K for SFS files (12K prior to VM/ESA 1.2.0)

    • if high paging rate, consider lowering

    • if low paging rate, performance benefit to increase

    • controlled by BUFFSIZE parm in DEFNUC macro

    • Max is 96K in VM/ESA 1.1.0 and above.

  • CRR Recovery Server

    • should have one or performance degrades significantly

  • Saved segments

    • CMSVMLIB on user side (includes SFS code)

    • CMSFILES on server side (SFS and CRR code)

Speaker Notes

USERS tells the server how much work it should configure itself to handle. If it is specified too large, the server may experience serial page fault problems and/or increased checkpoint duration (a long blip). If it is specified too small, the server will not configure enough agents (tasking objects) to handle the incoming requests, which can cause an undesirable queueing effect. It is better to overestimate a little.

The CMS file cache controls the amount of read ahead and write behind. The cache exists for each user in nonshared virtual storage. Some measurements indicate a value larger than 12K would benefit most environments. This cache is for SFS file I/O and should not be confused with the read ahead and write behind done by minidisk caching.

To change the CMS file cache size, update the BUFFSIZE parm in the DEFNUC macro, assemble DMSNGP ASSEMBLE, and rebuild the CMS nucleus. Refer to the Service Guide and the CP Planning and Administration manual for more details. The allowable range is 1 to 28K (96K in VM/ESA 1.1.0 and above).
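
As a sketch only (the surrounding operands in DMSNGP and the exact form of the value vary by release; check the comments in DMSNGP ASSEMBLE itself), the change amounts to one edited operand, followed by reassembling DMSNGP and rebuilding the CMS nucleus:

          * DMSNGP ASSEMBLE fragment -- other DEFNUC operands omitted
                   DEFNUC BUFFSIZE=96K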

A CRR recovery server should be active or a significant degradation in performance will be experienced (a 40% increase). Use QUERY FILEPOOL STATUS against the recovery server's file pool (the file pool ID followed by a colon) to determine whether users are connected.


DASD Placement

  • log placement considerations

    • place on separate channels and control units

    • place on volumes with little I/O activity

    • place on 3990-3 DASD Fast Write (VM/ESA 1.1.1)

  • catalog storage group (SG 1)

    • spread across volumes to distribute I/O

    • place minidisks adjacent to other small frequently referenced areas to minimize seek time

  • guidelines for spreading a storage group

    • spread across volumes to distribute I/O

    • non-SFS space is low or uniform usage

    • same amount of space on each volume

    • volumes should have similar performance characteristics

  • For placement tips related to availability see the CMS Planning and Administration Guide. Note that some involve trade-offs with performance.

Speaker Notes

  • Log placement - Separate channels and control units maximize the likelihood that the server can do I/O to the logs in parallel, reducing response time. Volumes with little other I/O activity minimize seek time, again helping response time. DASD Fast Write reduces commit processing time, which improves resource contention.

  • Storage Group 1 - A sizable fraction of all I/Os are to SG1, so it may be necessary to spread it across volumes to prevent any one volume from becoming a bottleneck. An SG1 minidisk is a relatively small, I/O-intensive area, so placing it next to other such areas minimizes seek time.

  • Data Storage Group - When a storage group spans volumes, the server allocates space evenly across those volumes. This tends to spread the I/O demand across those volumes.

VM Data Spaces

  • Concept

                 SFS Directory
    End User     +------------+     Server
   +----------+  |            |  +----------+
   |          |--| Data Space |--|          |
   |   CMS    |  +------------+  |   CMS    |
   |          |                  |          |
   +----------------------------------------+
   |                   CP                   |
   +----------------------------------------+

  • Usage considerations

    • Most benefit from highly used shared R/O or read-mostly data

    • Group updates to minimize multiple versions

    • Users should run in XC mode for most benefit

    • Separate R/O from R/W directories in different filepools.

Performance advantages

  • Relative to SFS without data spaces

    • CMS retrieves data from shared virtual storage (more efficient than server reading from DASD for each user)

    • Communication overhead with server eliminated

    • XC mode users:

      • get data directly from data space

      • FSTs in data space (shared). This can help:
        • reduce real storage requirements
        • CMS initialization
    • 370 and XA mode users:
      • get data from data space by asking CP
      • FSTs in user storage (not shared)
  • Relative to minidisk

    • Performance similar to minidisk with minidisk caching

    • Shared FSTs (without manual management)

Speaker Notes

The server (logically) puts the directory into a VM data space, and the user virtual machine takes the data from the VM data space.

The benefit of data spaces is based on the degree of sharing. They provide a great benefit in user virtual storage, as the FSTs are shared among the users that have the directory accessed, and in I/O, as the data is moved from the data space without a trip to the server.

Grouping updates will minimize the likelihood of having multiple versions in data spaces (discuss ACCESS-to-RELEASE consistency here). Having users run in XC mode is how the previously stated benefits are achieved; a directory sketch follows these notes.

Use separate servers so that 1) there is less scheduled down time for the R/O data and 2) the multiple-user rules (discussed later) do not apply.

The benefit of data spaces is based on the degree of sharing. Not only will exploitation of VM data spaces minimize expensive server requests, but it will allow a single copy of data to be shared among several users. This can be a significant boost for storage constrained systems.

Performance is similar to that of read-mostly minidisks with minidisk caching. There are measurements that show both ends of the spectrum; it depends on workload and storage constraint.
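
A rough directory sketch for data space exploitation (the XCONFIG operands and sizes shown here are illustrative, not authoritative; see the CP Planning and Administration manual for the exact syntax):

          * server virtual machine: allow it to create and share data spaces
            XCONFIG ADDRSPACE MAXNUMBER 100 TOTSIZE 1024M SHARE
          * end user virtual machine: ESA/XC mode so data spaces map directly
            MACHINE XC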


Recovery

  • to minimize time to restore control data:

    • keep file pool from growing too large (number of files, directories, aliases, etc.)

    • do more frequent backups

    • do backups to another file pool

    • specify large CATBUFFERS

  • to minimize time to restore user data:

    • limit storage group size to meet recovery time requirements

    • specify large CATBUFFERS

Speaker Notes

These suggestions should minimize the amount of time required to restore the control data of a file pool. "Too large" refers to the number of objects (files, aliases, directories, etc.) and is relative to the restore rate. Some measurements showed a restore rate of 22MB/min or 49,000 objects/min, and a redo rate of 5.3 log blocks/min. The less file pool change activity there has been since the last backup, the less time it will take to apply. SFS can do double buffering on restore when the backup is in another file pool. For a 32MB machine, try setting CATBUFFERS to 5000; this will reduce the time to reapply changes to the catalog.
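
As a sketch, the CATBUFFERS suggestion (together with the USERS parameter discussed earlier) goes in the server's startup parameter file, typically its DMSPARMS file on the A-disk; the values shown are illustrative only:

          USERS 400
          CATBUFFERS 5000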


Multiple File Pools

  • maximum recommended enrolled users per file pool:

    • (# system defined users / # system active users) X 300
      system defined user - defined in system directory
      system active user  - active over 1 minute interval

    • does not apply to R/O file pools

    • assumes a normal CMS interactive workload

Speaker Notes

There is a practical upper limit to the rate at which a server can process requests, and it is expressed in the formula on the foil. System defined users are the CP directory entries for your system. Active users is the average number of users during peak hours who have interacted with the system during a one-minute interval; this can be found in monitor output such as the VMPRF SYSTEM_SUMMARY_BY_TIME report (USERS ACTIV column).

The gating factors for this calculation are 1) involuntary rollbacks and 2) checkpoint processing. Catalogs are shared, so even with unique data there are locks and the potential for deadlocks. Multiple file pools do not mean duplicating data.
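
A worked example of the formula on the foil, with hypothetical numbers: 3000 users defined in the CP directory and an average of 600 active over a one-minute interval during peak hours give

          maximum enrolled users per file pool = (3000 / 600) x 300
                                               = 5 x 300
                                               = 1500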

Monitoring Performance

  • CP monitor data

    • standard data by userid (USER domain)

    • SFS-contributed (APPLDATA domain)

    • VMPRF usage
      • Add SFS servers to INCLUSER file
      • Create user class for each server in UCLASS file
    • VMPRF reports

      • SFS_BY_TIME (PRF083) - big picture

      • SFS_IO_BY_TIME (PRF091) - I/O stats

      • SFS_REQUESTS_BY_TIME (PRF082) - filepool request distribution

    • VMPAF support

  • QUERY FILEPOOL STATUS or QUERY FILEPOOL REPORT (added in Rel 2)

    displays information in the following groups:

    • SFS counters (same as APPLDATA)

    • CRR counters (same as APPLDATA)

    • File pool compare information

    • File pool information

    • Currently defined minidisks

    • Agent information

    • Log information

    • Catalog space information

  • QUERY DATASPACE

  • QUERY ACCESSORS with DATASPACE option

Speaker Notes

Overall, monitoring the performance of your system is unchanged if you use SFS. Still check overall system indicators, and also collect the SFS data shown here. Use this data for performance problem determination.

Data for history/trend analysis can come from VM monitor data. VMPRF uses some of the SFS-supplied statistics and combines them with other monitor data to produce three different reports. A sketch of the INCLUSER and UCLASS entries mentioned on the foil follows these notes.

VMPAF will use VMPRF Summary files, so that one can access all the individual counters if need be.

The QUERY FILEPOOL STATUS command (or the new QUERY FILEPOOL commands in VM/ESA 1.2.0) can be used for an immediate snapshot of the SFS server. The same counters and timers are involved.
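
As a sketch of the VMPRF setup mentioned on the foil (server names and class names are hypothetical; the exact record formats are documented in the VMPRF User's Guide and Reference):

          INCLUSER file:              UCLASS file:
            RWSERV1                     RWSERV1  SFSPOOL1
            RWSERV2                     RWSERV2  SFSPOOL2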


Solving Performance Problems

  1. Confirm and Isolate the problem

  2. Take corrective action

  3. Evaluate the effectiveness

Speaker Notes

Most people understand the general performance analysis process, so this shouldn't be new. SFS fits right in; there is no need to do anything drastically different.


Confirm and Isolate the problem

  • is it SFS or general system problem?

    • Determine the % increase in average file pool request time from the QUERY FILEPOOL STATUS command or monitor data

    • Determine the % increase in overall response time from monitor data

    • If the SFS increase is a large percentage of the total, then it is probably an SFS problem.

  • now use the Symptoms/Causes Table from VM/ESA Performance Manual.
Symptom               Possible Causes                 Page
High CPU Time         ...                             ...
High Block I/O Time   Not enough catalog buffers      163
...                   ...                             ...
High Other Time       Insufficient real agents        169
                      Too much server paging          169
                      Server code not in saved seg    170
                      Server priority too low         171

Speaker Notes

To make the determination whether it is an SFS or a general system problem, compare the percentage increase in average file pool request service time to the percentage increase in average response time. Average file pool request service time is displayed in the SFS_BY_TIME VMPRF report or can be calculated from the QUERY FILEPOOL STATUS output by dividing File Pool Request Service Time by Total File Pool Requests. If the file pool request time is much greater, then the server is probably contributing to the problem.
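
A worked example with hypothetical numbers, where each column is computed from the counter deltas over its own measurement interval:

                                                   Baseline   Problem period
          File Pool Request Service Time (msec)   1,000,000        3,600,000
          Total File Pool Requests                    20,000           24,000
          Average time per request (msec)                 50              150  (+200%)
          Average response time (sec)                   0.30             0.36  (+ 20%)

Because the average file pool request time grew far faster than overall response time did, the file pool server is probably contributing to the problem.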

The Symptoms/Causes table was moved to the VM/ESA Performance manual in Release 2. Prior to that, it was in the CMS Planning and Administration Guide.


Take Corrective Action

The Symptoms/Causes table points to a page with possible corrective actions.

Page 169 (VM/ESA Performance Manual)

  • Too much server paging

    • Problem description: Excessive paging in the server virtual machine is resulting in increased response times...

      ...

    • Background: When a page fault occurs in a user machine, only that user waits until the page fault is resolved. When a page fault occurs in a server machine, all users currently being processed by that server machine must wait...

    • Possible corrective actions:

      • Reserve pages (SET RESERVED)...

      • High dispatching priority (SHARE REL nnnn)

      • Use saved segment for server code

      • ...

Try ONE of the possible actions.


Evaluate for effectiveness

  • review monitor data

  • examine key performance indicators

  • if not acceptable,

    • correct actions taken?

    • look at additional improvements

Speaker Notes

After reading possible corrective actions, choose one (and only one at a time) and implement it.

An often skipped step is the validation that the fix really worked. Now on to the case study...


Case Study - VMPRF Report (PRF006)

Before

   RESPONSE_ALL_BY_TIME
   Transaction Response Time and Throughput for ALL Users
 
 
             <-----------Response Time---------------->
             <---Triv---> <--Non-Triv-->
 
 
From  To                                  Quick
Time  Time      UP     MP      UP     MP   Disp   Mean
 
09:24 09:54  0.163  0.000  69.095  0.000  9.158  38.635

Case Study - VMPRF Report (PRF083)

Before

   SFS_BY_TIME
   SFS Activity by time
 
 
                                <---Time Per File Pool Request--->
 
 
From  To            FPR   FPR                      Block
Time  Time  Userid  Count Rate   Total  CPU   Lock   I/O ESM Other
 
09:24 09:54 RWSERV1 22545 12.540 3.443 0.004 0.140 1.740   0 1.559
09:24 09:54 RWSERV2 21470 11.942 4.205 0.004 0.190 1.986   0 2.027
 
 
<----Server Utilization-------> <----Agents----->
 
 
              Page  Check                      Deadlocks
Total   CPU   Read  point  QSAM  Active Held   w/ RB
 
75.29  5.47  60.38   9.44  0.00  43.2  152.6    0
82.95  5.29  67.27  10.40  0.00  50.2  146.7    0

Speaker Notes

"BEFORE" here means before we get done fixing the system. Ideally we'd also have a picture from before the "before", when things were good, and then the move to "bad". In this case, things are so bad it is obvious that there is a problem. Response time is horrible. We assume it is SFS since all users of SFS show the problem.

We can look further into the VMPRF reports at the SFS_BY_TIME report. It's worth spending some time here pointing things out. Notice that most of the categories from the Symptoms/Causes table map to the Time Per File Pool Request areas. We have two file pool servers. We mentioned "Deadlocks w/ RB" before; point that out in the last column.

Right off the bat we know something is wrong, since the FPR total time is several seconds! A large chunk of that is in Other. From there, we look at Utilization and see that Page Read time is out of sight.


Case Study - Use of S and C Table

Symptom               Possible Causes                 Page
High Other Time       Insufficient real agents        169
                      Too much server paging          169
                      Server code not in saved seg    170
                      Server priority too low         171

  • Possible corrective actions: (from p.169)

    • Reserve pages (SET RESERVED)...

    • High dispatching priority (SHARE REL nnnn)

    • Use saved segment for server code

    • ...

Case Study - VMPRF Report (PRF008)

Before

   USER_RESOURCE_UTIL
   Resource Utilization by User
 
              Est
Userid ...... WSS  Resid  .....
 
RWSERV1      1163   1142
RWSERV2      1225   1217

  • SET RESERVED RWSERV1 1300

  • SET RESERVED RWSERV2 1300

Speaker Notes

We can go back to the Symptoms/Causes table, and then to the pointer about "too much server paging". The action chosen here is SET RESERVED, sized from the WSS.

We can get the value for WSS from VMPRF or INDICATE USER, and then issue the above commands.


Case Study - VMPRF Report (PRF006)

Before

   RESPONSE_ALL_BY_TIME
   Transaction Response Time and Throughput for ALL Users
 
 
             <-----------Response Time---------------->
             <---Triv---> <--Non-Triv-->
 
 
From  To                                  Quick
Time  Time      UP     MP      UP     MP   Disp   Mean
 
09:24 09:54  0.163  0.000  69.095  0.000  9.158  38.635

After

   RESPONSE_ALL_BY_TIME
   Transaction Response Time and Throughput for ALL Users
 
 
             <-----------Response Time---------------->
             <---Triv---> <--Non-Triv-->
 
 
From  To                                  Quick
Time  Time      UP     MP      UP     MP   Disp   Mean
 
09:52 10:22  0.072  0.000   0.866  0.000  7.396   0.579

Case Study - VMPRF Report (PRF083)

Before
   SFS_BY_TIME                  SFS Activity by time
 
                                <---Time Per File Pool Request--->
 
From  To            FPR   FPR                      Block
Time  Time  Userid  Count Rate   Total  CPU   Lock   I/O ESM Other
 
09:24 09:54 RWSERV1 22545 12.540 3.443 0.004 0.140 1.740   0 1.559
09:24 09:54 RWSERV2 21470 11.942 4.205 0.004 0.190 1.986   0 2.027
 
 
<----Server Utilization-------> <----Agents----->
 
              Page  Check                      Deadlocks
Total   CPU   Read  point  QSAM  Active Held   w/ RB
 
75.29  5.47  60.38   9.44  0.00  43.2  152.6    0
82.95  5.29  67.27  10.40  0.00  50.2  146.7    0
After
   SFS_BY_TIME             SFS Activity by time
 
                                <---Time Per File Pool Request--->
 
From  To            FPR   FPR                      Block
Time  Time  Userid  Count Rate   Total  CPU   Lock   I/O ESM Other
 
09:52 10:22 RWSERV1 63617 35.343 0.158 0.003 0.002 0.051   0 0.103
09:52 10:22 RWSERV2 63479 35.266 0.158 0.003 0.002 0.050   0 0.103
 
 
<----Server Utilization-------> <----Agents----->
 
              Page  Check                      Deadlocks
Total   CPU   Read  point  QSAM  Active Held   w/ RB
 
39.51 11.64  15.44  12.43  0.00   5.6    9.5    0
42.52 11.81  17.44  13.27  0.00   5.6    9.6    0

Speaker Notes

Being good little performance managers, we look at the after case. The response time is much more acceptable.

We need to go a step further and see if the change in Resp Time is really from what we did. In the after picture, things are much better. We see FPR total time is subsecond, where it should be.

Also notice that the FPR rate has increased. Not only are we getting better response time, but better throughput as well. The Deadlocks w/RB are still zero which is good. You can see that the number of active agents and held agents also decreased. This is all part of the change to avoid serialization from page faults.

This case study was a gross problem, but is sufficient to show the methodology.


Some Application Performance Tips

See CMS Application Development Guide (SC24-5450)

  • use direct reference vs ACCESS command:

    • few file operations -- use direct reference

    • many file operations -- do ACCESS first

  • use hierarchical directories to minimize the number of files accessed.

  • use DMSEXIFI instead of DMSEXIST when applicable

  • replace file directly instead of write temp/erase/rename

Speaker Notes

As users become more comfortable with SFS they will write or use applications that exploit SFS. It is good to understand the performance impacts.

  • A trade-off between reference methods exists. If there are only a few operations, use direct referencing; with many operations, ACCESS the directory first. Per request, direct referencing is slightly more expensive.

  • To save virtual storage references and search overhead, minimize the number of files accessed by utilizing the tree structure.

  • DMSEXIFI allows us to use information cached in the end user machine and therefore avoid some server requests.
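
A small sketch of the last tip on the foil (replace the file directly rather than write temp, erase, and rename), shown here at the CMS command level from REXX with hypothetical file names:

          /* costlier pattern: three catalog-changing operations */
          'COPYFILE NEW OUTPUT A REPORT TEMP A'
          'ERASE REPORT DATA A'
          'RENAME REPORT TEMP A REPORT DATA A'

          /* cheaper: replace the existing file in one operation */
          'COPYFILE NEW OUTPUT A REPORT DATA A (REPLACE'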

Understanding your application's performance

  • arrange for the file pool server to be dedicated

  • Invoke the following EXEC...
          /* Measure the specified function */
          ARG function
          "QUERY FILEPOOL STATUS"        /* counters before              */
          time=TIME('R')                 /* reset the REXX elapsed timer */
          function                       /* issue the command to measure */
          time=TIME('R')                 /* seconds since the reset      */
          SAY "Elapsed time is" time "seconds"
          "QUERY FILEPOOL STATUS"        /* counters after               */
    

  • Example results...
          Elapsed time is 1.3 seconds
          Q FILEPOOL STATUS (selected values)
          Initial    Final    Delta     Counter Name
          -------    -----    -----     ------------
               14       15        1     Refresh Directory Requests
            13018    13136      118     File Pool Request Time (msec)
            23795    23904      109     Total BIO Request Time (msec)
              708      727       19     I/O Requests to Read Catalog
             1726     1745       19     Total I/O Requests
    

Speaker Notes

At times you want to evaluate an application of your own or one to be added to the system. The foil describes the method. Note that in this example, the SFS time (118 milliseconds) is a small part of the application time (1.3 seconds).


Summary

  • Consider performance when creating a file pool

  • Follow normal performance management methodology

  • SFS provides related performance information

    • Realtime (Query Filepool Status)

    • Monitor

  • Read the manuals and let us know the weak areas

Speaker Notes

When performance is considered upfront, there should be no performance problems. SFS performance doesn't need constant attention, but periodically check it out.

The bottom line is that VM has tried to make SFS performance management as painless as possible, both by automating and by documentation. If you find this not to be the case, we need to know; we can't fix what we don't know about.

Do you want to learn even more about SFS performance management? Then check out SFS Performance Management Part II: Mission Possible. You can get this by sending a request to Bill Bitner.


References

Primary Sources (VM/ESA 2.2.0)

  • CMS File Pool Planning, Administration, and Operation (SC24-5751)

  • Performance (SC24-5782)

  • CMS Application Development Guide (SC24-5761)

Others:

  • Planning and Administration (SC24-5750)

  • CP Command and Utility Reference (SC24-5773)

  • VMPRF User's Guide and Reference (SC23-0460)

  • VMPAF User's Guide and Reference (SC23-0564)

Acronyms

CMS
Conversational Monitor System
CP
Control Program
ESA
Enterprise Systems Architecture
ITR
Internal Throughput Rate
ITRR
Internal Throughput Rate Ratio
TPNS
Teleprocessing Network Simulator
VM/XA
Virtual Machine/Extended Architecture
VM/ESA
Virtual Machine/Enterprise Systems Architecture
VTAM
Virtual Telecommunications Access Method
RTM
Realtime Monitor
VMPRF
VM Performance Reporting Facility
VMPAF
Performance Analysis Facility/VM