Description of SFSULIST
Download count:
12 this month, 6382 altogether.
Downloads for SFSULIST:
VMARC archive: v-134K
From Kris Buelens, IBM Belgium
SFSULIST is a FILELIST-like application to handle your SFS-enrolled users more easily. Viewing a user's DIRLIST is just a matter of pressing PF11; a FILELIST of the top directory is PF23 ... For the change history, refer to SFSULIST Change history below.
Here's a cut-and-paste of the screen image:
KRIS     SFSULIST  A1  V 160  Trunc=160 Size=42 Line=1 Col=1 Alt=0
     Group No.  Blocks In-Use    Blocks Free      Pool: SFSWERK
   ?    1        117,493 - 76%      37,437      ? Log full:  4%
   ?    2        622,626 - 42%     862,696      ? Catalog: d54% i82%
 Cmd Userid   StorGrp Block Limit Blocks Committed Admin Connected Type
     _GUIMON     2         1,000         38-04%     no      no     bfs
     _JAVA       2        20,000      6,223-31%     no      no     bfs
     _JNRCMS     2         1,000         29-03%     no      no     bfs
     _NETREXX    2         1,000        396-40%     no      no     bfs
     GUY         2         1,000          0-00%    YES      no     SFS
     HTTPD       2       200,000    100,487-50%     no      4x     SFS
     HTTPDBFS    2       100,000      1,029-01%     no      no     bfs
     KBCWERK     2        20,000      5,592-28%    YES      1x     SFS
     U14756      2         1,000          0-00%     no      no     SFS
     U14756W     2         1,000          4-00%     no      no     SFS
     U23638W     2         1,000          6-01%     no      no     SFS
     U23759W     2         1,000         14-01%     no      no     SFS
     VMCOLL      2     7,000,000  5,392,676-77%     no      2x     SFS
     FTPSERVE    -             -              -    YES      no      -
     KRIS        -             -              -    YES      1x      -
     MAINT       -             -              -    YES      no      -
     SCHEDUL     -             -              -    YES      no      -
     SMSMASTR    -             -              -    YES      no      -
     SMSSRV01    -             -              -    YES      3x      -
     SMSSRV02    -             -              -    YES      2x      -
     TSLARUN     -             -              -    YES      1x      -
     VMBACKUP    -             -              -    YES      3x      -
 Slash symbols: / =pool:user.  /n =user  /p =pool  /d =dirid (pool:user.)
 1= S(Usr/Adm) 2= Refresh  3= Quit    4= S(Grp/Conn) 5= S(Used/Type) 6= ?
 7= Backward   8= Forward  9= Sort(%) 10=            11= DIRL/FILEL 12= Sort(size)
 ====>
                                                        X E D I T  1 File
By using the / symbols, you can execute any command you like. Some examples:
   filelist                      ( SFSULIST starts FILEL * * /)
   filelist * listing
   access / z
   modify user +2000 for /u /p   ( you know the syntax)
   modify user +2000             ( SFSULIST helps you )
   delete user /up               ( you know the syntax)
   delete user                   ( SFSULIST helps you )
   grant auth * * / to myfriend (WRITE
   q filepool
   disable filespace for /up     ( you have to know syntax)
Before you shoot the toolsmith, please read the following:
The syntax of many SFS commands is not obvious, certainly not
for the SFS admin commands.
SFSULIST tries to help a bit by looking at the commands you enter.
Making a perfect "default substitution" as in FILELIST is not possible:
- For file and dirid related commands, the best default "slashsub" is pool:userid.
- For SFS Admin commands though, the best default would be userid pool
- We intercept DELETE, ENROLL, and MODIFY USER and insert "pool:user." or "user pool:" as appropriate.
- We also recognize FILELIST: when you enter "FILEList", we will execute "FILELIST * * pool:user."
- For all other commands, we decided to select the file related substitution as default behaviour. That is: "pool:user." is appended to the command when no / is in the command you enter. A / on its own is replaced by "pool:user." too.
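The default substitution rules above can be sketched in Python. This is a hedged illustration only, NOT the real SFSULIST code (which is REXX/XEDIT): the function name is an invention, and the real parsing of the admin commands is more complete than the simple "user pool:" append shown here.

```python
# Illustrative sketch of the default "/" substitution rules described
# above (hypothetical re-implementation; SFSULIST itself is REXX).
def substitute(command: str, pool: str, user: str) -> str:
    dirid = f"{pool}:{user}."                 # file/dirid-style default
    words = command.split()
    verb = words[0].upper() if words else ""

    if verb.startswith("FILEL"):              # FILELIST gets "* * pool:user."
        return f"FILELIST * * {dirid}"
    if verb in ("DELETE", "ENROLL", "MODIFY") and len(words) > 1 \
            and words[1].upper().startswith("USE"):
        return f"{command} {user} {pool}:"    # admin-style "user pool:"
    if "/" not in command:                    # no slash: append pool:user.
        return f"{command} {dirid}"
    # a "/" on its own is replaced by pool:user.
    return " ".join(dirid if w == "/" else w for w in words)

print(substitute("filelist", "SFS72", "KRIS"))    # FILELIST * * SFS72:KRIS.
print(substitute("access / z", "SFS72", "KRIS"))  # access SFS72:KRIS. z
```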
Listing your SFS servers
The SFSLIST EXEC lists all your SFS servers. You can (re)start or stop them, or use SFSLIST as a bootstrap to SFSULIST:
 +--------------------------------------------------------+
 | SFS servers for BRUVMBRS                               |
 | Commands you can enter:                                |
 |   PF11 List SFS users                                  |
 |   S    Stop an SFS server                              |
 |   R    Restart the SFS server                          |
 |   L    stop and Logoff the server                      |
 +--------------------------------------------------------+
 | _ VMSERVB  SFSBCRS: active  --> stg 2 81%              |
 | _ VMSERVR  VMSYSR:  active                             |
 | _ VMSERVU  VMSYSU:  active                             |
 | _ VMSERVS  VMSYS:   active                             |
 | _ VMSERSMS DFSMS:   HCPTHL045E VMSERSMS not logged on  |
 +--------------------------------------------------------+
 | Enter=Submit Request(s)                                |
 | PF: 2 Refresh  3 Quit  11 SFSUlist                     |
 +--------------------------------------------------------+
As can be seen above, a quick check is performed to detect possible problems (like storage pool 2 of SFSBCRS being 81% full). SFSLIST performs checks at the filepool level only, not at the filespace level.
As there is no easy way to find all SFS servers on a VM system, we
use a control file (SFS SERVERS) defining your known SFS servers.
This way we can also detect SFS servers that are down or not logged on.
A bit more information can be found in the SFSLIST EXEC.
Extra notes for SFSULIST:
- The / symbol substitution is a bit more elaborate than in FILELIST.
We allow you to construct dirids using the / symbols.
We understand for example commands like:
     pipe < some file /d.subdir1.subdir2 | ...
          becomes  pool:user.SUBDIR1.SUBDIR2 | ...
     filel * * /p:maint.userdefs./u.profiles
          becomes  pool:MAINT.USERDEFS.user.PROFILES
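The token expansion inside such dirids can be sketched as follows. This is a hypothetical Python illustration (the real code is REXX, and SFS itself uppercases the resulting names); the function name and the substitution order are assumptions.

```python
# Sketch of the /d, /p, /u token expansion inside a dirid
# (illustration only, not the real SFSULIST logic).
def expand_word(word: str, pool: str, user: str) -> str:
    out = word.replace("/d.", f"{pool}:{user}.")   # /d -> pool:user.
    out = out.replace("/p", pool)                  # /p -> pool
    out = out.replace("/u", user)                  # /u -> userid
    return out

print(expand_word("/d.subdir1.subdir2", "SFS72", "KRIS"))
# SFS72:KRIS.subdir1.subdir2
```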
- When the % full is 70% or higher, we start to color the % values: red from 90%, pink from 80%, and yellow from 70% on (we don't do this for the LOG, as SFS schedules a backup when it exceeds its threshold).
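  As a minimal sketch of this coloring rule (color names only; the real code sets 3270 display attributes):

```python
def pct_color(pct: int) -> str:
    """Map a "% full" value to the highlight color described above."""
    if pct >= 90:
        return "red"
    if pct >= 80:
        return "pink"
    if pct >= 70:
        return "yellow"
    return "default"

print(pct_color(76))   # yellow
```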
- Note the ?-marks in the header of the display (where we list the
  storage group information for the SFS server).
  These ?-marks invite you to place the cursor there and press Enter.
  (The ?-marks are not defined as unprotected, as we want the "cursor
  home" key to set the cursor on the line with the first user.)
  SFSULIST will then issue an appropriate Q FILEPOOL command:
     Q FILEPOOL LOG       when the cursor is near the % LOG full
     Q FILEPOOL CATALOG   when the cursor is near the % full for the
                          catalog data and index parts ("d54% i82%" above)
     Q FILEPOOL MINIDISK  when the cursor is before column 50 in a line
                          describing a storage group
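  This cursor-driven dispatch can be sketched like this. The way the cursor position is inspected here is an assumption for illustration; the real XEDIT macro reads the actual screen contents.

```python
# Hedged sketch: pick the Q FILEPOOL subcommand from the cursor position.
def filepool_query(header_line: str, cursor_col: int) -> str:
    if "Log full" in header_line and cursor_col >= 50:
        return "Q FILEPOOL LOG"
    if "Catalog" in header_line and cursor_col >= 50:
        return "Q FILEPOOL CATALOG"
    if cursor_col < 50:            # a line describing a storage group
        return "Q FILEPOOL MINIDISK"
    return ""

print(filepool_query("?  2  622,626 - 42%  862,696  ? Catalog: d54% i82%", 10))
# Q FILEPOOL MINIDISK
```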
The Q FILEPOOL MINIDISK command is not exactly the same as when you execute it natively:
- We only display the minidisks of the selected storage group.
- When we can obtain the userID of the SFS server, we will display
for each minidisk the number of cylinders or FB-512 blocks
and the pack on which the SFS minidisk is located. Here is a sample from
an SFS server that started its life in VM/SP Release 6, back in 1988.
SFS72 File Pool Minidisks
Start-up Date 06/12/07                        Query Date 06/12/07
Start-up Time 15:10:04                        Query Time 15:39:45
==============================================================================
                      FILE POOL MINIDISK INFORMATION
   200 Maximum Number of Minidisks        34 Minidisks in Use
==============================================================================
                    STORAGE GROUP MINIDISK INFORMATION
Storage   Minidisk   4K Blocks       4K Blocks   Virtual   Cylinders   CP
Group No.  Number     In-Use           Free      Address   or Blocks   Volume
    2         3      43572 -  90%      4968       0501        270      VTE004
    2         4      44135 -  91%      4405       0502        270      VTE001
    2         7      13724 -  92%      1253       0503         84      VTE001
    2        16      14977 - 100%         0       0504         84      VTE003
    2        18      13832 -  92%      1145       0505         84      VTE001
    2        19      13030 -  87%      1947       0506         84      VTE001
    2        20      10867 -  73%      4110       0507         84      VTE003
    2        21       8465 -  57%      6512       0508         84      VTE001
    2        23        669 -   4%     14308       0509         84      VTE001
    2        25      13876 -  93%      1101       050A         84      VTE003
    2        26        132 -   1%     14845       050B         84      VTE001
    2        27      24772 -  92%      2193       050C        150      VTE002
    2        30      33088 -  92%      2864       050D        200      VTE005
    2        33      78625 -  87%     11273       050E        500      VTE006
==============================================================================
                      STORAGE GROUP MINIDISK TOTALS
Storage    4K Blocks       4K Blocks
Group No.   In-Use           Free
    2      313764 - 82%     70924
*** Note: to get better IO concurrency, you should not place multiple
***   minidisks of an SFS server on the same pack.  10 of 14 minidisks share
***   packs in storage group 2; only 6 concurrent IO instead of 14 possible.
*** Space allocated per physical volume:
      138,402 4K blocks on VTE001, on 7 Mdisks
       26,965 4K blocks on VTE002, on 1 Mdisks
       44,931 4K blocks on VTE003, on 3 Mdisks
       48,540 4K blocks on VTE004, on 1 Mdisks
       35,952 4K blocks on VTE005, on 1 Mdisks
       89,898 4K blocks on VTE006, on 1 Mdisks
- When we have the CP minidisk information, we will warn you when the
server has multiple minidisks on the same pack.
Minidisks on the same pack can make it seem as if SFS refuses to use
certain newly added storage pool minidisks
(look at minidisks 0509 and 050B above).
With well-spread minidisks, SFS can perform better.
One reason is that the SFS server balances I/O over its minidisks per pack. In other words: SFS knows it is useless to ask CP to perform an I/O to a pack that is already handling another I/O. The extremes:
- when SFS has 15 minidisks on 15 packs, it can start 15 I/Os at once
- when SFS has 15 minidisks on 1 pack, it can start only 1 I/O at a time
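Applied to the Q FILEPOOL MINIDISK sample above, the concurrency arithmetic is simply a count of distinct packs; a small sketch (the volume names are copied from the sample):

```python
# Sketch of the concurrency count: SFS can start at most one I/O per
# pack, so the possible concurrent I/Os equal the number of distinct
# packs.  Volume names taken from the Q FILEPOOL MINIDISK sample above.
mdisk_packs = ["VTE004", "VTE001", "VTE001", "VTE003", "VTE001",
               "VTE001", "VTE003", "VTE001", "VTE001", "VTE003",
               "VTE001", "VTE002", "VTE005", "VTE006"]
concurrent = len(set(mdisk_packs))
print(f"{concurrent} concurrent I/Os possible for {len(mdisk_packs)} minidisks")
# 6 concurrent I/Os possible for 14 minidisks
```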
- When the cursor is beyond column 15, and when we have the CP minidisk
information, and when multiple minidisks share the same packs, we
will calculate how much space SFS has available per physical pack
(as in the figure above).
In the example, you can see that pack VTE001 has much more SFS data to handle than VTE002. So, if one would not want to spread the SFS minidisks over more packs, it would be better to move some minidisks from VTE001 onto VTE002 or VTE005.
- A possible pitfall, or "why doesn't this work for me?"
We use "CP Q RESOURCE fpool" to find the userID of the SFS server, which means we cannot get this extra information when the SFS server is on a remote VM system. Similarly, with entries in UCOMDIR/SCOMDIR NAMES you can use a kind of alias to address a filepool. We, for example, have unique SFS names all over our VM systems (SFS72, SFS71, SFS75, etc.; which was once a requirement for remote SFS usage). However, with an entry in SCOMDIR NAMES we can reach the "local" SFS named SFSxx using SFSD as filepoolid. As Q RESOURCE only knows the real name, we can't find the server's userID.
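The "space allocated per physical volume" figures above amount to summing allocated (in-use plus free) 4K blocks per pack and counting minidisks. A sketch, using VTE001's minidisk rows copied from the sample:

```python
# Sketch of the "space allocated per physical volume" report: per CP
# volume we sum allocated (in-use + free) 4K blocks and count the
# minidisks.  The VTE001 rows are copied from the sample above.
from collections import defaultdict

# (volume, in-use blocks, free blocks) for each VTE001 minidisk
mdisks = [("VTE001", 44135, 4405), ("VTE001", 13724, 1253),
          ("VTE001", 13832, 1145), ("VTE001", 13030, 1947),
          ("VTE001",  8465, 6512), ("VTE001",   669, 14308),
          ("VTE001",   132, 14845)]

totals = defaultdict(lambda: [0, 0])        # volume -> [blocks, mdisk count]
for vol, in_use, free in mdisks:
    totals[vol][0] += in_use + free
    totals[vol][1] += 1

for vol, (blocks, count) in totals.items():
    print(f"{blocks:>9,} 4K blocks on {vol}, on {count} Mdisks")
#   138,402 4K blocks on VTE001, on 7 Mdisks
```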
SFSULIST Change History
Change in V 1.14 28 February 2011
Add PF keys in SFSLIST and SFSULIST if the SFSPSTAT EXEC is found. SFSPSTAT is part of the SFSKTOOL package and lets you produce GDDM graphics of SFS space usage, provided you collect the right SFS statistics (refer to SFSKTOOL).
Change in V 1.13 9 March 2010
Improvement for SFSLIST: you can now code an * as nodeid in SFS SERVERS to indicate a line valid for all your VM systems. To differentiate it from comment lines, such an * must not be placed in column 1.
 +--------------------------------+
 | *nodeid  userid   fpoolID WngPct|
 | VMKBBR01 VMSERVS  SFS72    80   |
 | *        VMSERVR  VMSYSR   70   |
 | *        VMSERVS  VMSYS    70   |
 | VMKBCT01 VMSERVD  SFSD     75   |
 +--------------------------------+
Remark: a generic line can be overruled by a specific line, as illustrated by VMSERVS above: its filepool ID is VMSYS, except on VMKBBR01 where it is SFS72.
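A sketch of how such a file could be resolved for one VM system. This is a Python illustration only (SFSLIST is REXX); the function name and the parsing details are assumptions, but the override rule matches the remark above.

```python
# Hedged sketch of resolving SFS SERVERS entries for one VM system:
# an "*" in column 1 is a comment, an "*" nodeid (not in column 1)
# matches every system, and a specific nodeid overrules the generic one.
def resolve_servers(lines, nodeid):
    servers = {}
    for line in lines:
        if line[:1] == "*" or not line.split():
            continue                                  # comment or empty
        node, userid, fpool, pct = line.split()
        if node == nodeid:
            servers[userid] = (fpool, int(pct))       # specific line wins
        elif node == "*":
            servers.setdefault(userid, (fpool, int(pct)))
    return servers

config = [
    "*nodeid  userid  fpoolID WngPct",                # comment line
    " VMKBBR01 VMSERVS SFS72  80",
    " *        VMSERVR VMSYSR 70",
    " *        VMSERVS VMSYS  70",
    " VMKBCT01 VMSERVD SFSD   75",
]
print(resolve_servers(config, "VMKBBR01"))
```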
Change in V 1.12 1 February 2010
Fix to SFSLIST: when no SFS servers are found in the file SFS SERVERS, SFSLIST is supposed to list all users named VMSER*. In the Q RESOURCE scan, SFSLIST must use CP's nodeid, not the nodeid from IDENTIFY.
Change in V 1.11 5 January 2010
Two bug fixes in SFSLIST.
Change in V 1.10 11 December 2009
Addition of SFSLIST, lists all your SFS servers.
Change in V 1.9a 8 February 2008
Avoid reporting 00% full for storage groups that are 100% full.
Change in V 1.9 5 January 2008
When requesting the space allocated per physical volume, the counters were not reset to 0 at a volume switch. It incorrectly displayed this (instead of what is illustrated above):
*** Space per physical volume:
      138,402 4K blocks on VTE001, on  7 Mdisks
      165,367 4K blocks on VTE002, on  8 Mdisks
      210,298 4K blocks on VTE003, on 11 Mdisks
      258,838 4K blocks on VTE004, on 12 Mdisks
      294,790 4K blocks on VTE005, on 13 Mdisks
      384,688 4K blocks on VTE006, on 14 Mdisks
Change in V 1.8 4 December 2007
When SFSULIST was used by a non-SFS-administrator, the area on the screen where the SFS catalog usage would be displayed got filled with uninitialized REXX variables.
Change in V 1.7 11 June 2007
This version fixes a bug and adds new features:
- Avoid flagging users with "*-1" and "*** cannot execute commands for **"
  each time the Enter key is pressed.
- The header lines now include the %used of the catalog data and index parts.
- When the %used of the catalog or storage groups is high, we display these numbers in a special color to draw attention.
- When you press Enter with the cursor placed on a header line that lists a storage group, the CATALOG usage, or the LOG usage, we issue Q FILEPOOL MINIDISK, Q FILEPOOL CATALOG, or Q FILEPOOL LOG respectively.
Change in V 1.6 8 June 2007
There are a few, more and less important, changes:
- Some colors and alignments were changed a bit.
Changes in V 1.5 6 October 2003
This version bypasses a NAMES file problem that could produce a message like:
   DMSJNL647E Localid not specified for U15568 at KBMEMO in KRIS NAMES file.
In this particular case, a QUERY was issued for filespace VMSTOR,
which happens to be a nickname in my NAMES file.
In the SFSULIST case, no nickname should ever be used, as
SFSULIST works with real filespace names. Our bypass for this
problem is to surround 3 commands with NUCXLOAD/NUCXDROP of a fake
NAMEFIND module (thanks to Alan Altmark).
The 3 commands that we "protect" are Query, MODify USEr and DELete
USEr. For PF11 (DIRLIST) we use the NONICK option; for PF23
(FILELIST) there never is a problem (it has no nickname resolution).
Changes in V 1.4.a 24 April 2003
When not authorized to connect to a filepool, SFSULIST displayed
an empty header.
That is: the mdisk usage information was missing, leaving you
guessing why this information is missing.
SFSULIST now displays an error message instead of the empty header.
Changes in V 1.4 11 April 2003
SFSULIST checks the "filespace" type when you use PF11. For a BFS
filespace we try to call BFSLIST (starting DIRLIST for a BFS space
works, but is useless as it is a dead end). BFSLIST is a separate
package.
Changes in V 1.3 20 July 2001
SFSULIST now also gives an improved DIRLIST: the QSFSBLK EXEC and XEDIT
macro are added to the package.
EXEC QSFSBLK <fn ft> dirid
Example:
EXEC Qsfsblk SFS72:MAINT.BACKUP
4K blocks used for SFS72:MAINT.BACKUP : 47 (or 0.2 Meg) Migrated: 79%
You use QSFSBLK XEDIT by starting DIRLIST and entering the
   QSFSBLK <fn <ft>>
command in the DIRLIST command area.
When no DFSMS-migrated files are found in the listed directories,
QSFSBLK displays the number of 4K blocks used by the files and the number
of files in the directories:
KRIS DIRLIST A0 V 319 Trunc=319 Size=24 Line=1 Col=1 Alt=47
Cmd Fm Directory Name/Minidisk Address 4K blocks Fils
- SFS72:KRIS. 455 92
- SFS72:KRIS.A_PSF_CENTRAAL 47 41
- SFS72:KRIS.A_VANALLES 582 99
- SFS72:KRIS.APING 90 3
- SFS72:KRIS.APPC_SECUR 42 9
- SFS72:KRIS.APPCPIPE 74 12
- SFS72:KRIS.APTFS 0 0
- SFS72:KRIS.ATSLAGENT 325 152
- SFS72:KRIS.CHAT 9 4
- SFS72:KRIS.COMPASS 640 316
- SFS72:KRIS.CSPPLI 23 8
- SFS72:KRIS.DFSMSTOOLS 66 24
- SFS72:KRIS.DISTJOBS 0 0
- SFS72:KRIS.DISTJOBS.DISTFILE 0 0
- SFS72:KRIS.DISTJOBS.DISTFILE.AATEST 12 10
- SFS72:KRIS.DISTJOBS.DISTFILE.AATEST.DISK1 0 0
B SFS72:KRIS.DISTJOBS.DISTFILE.ACCTDIR 13 11
- SFS72:KRIS.DISTJOBS.DISTFILE.ADSMV3 20 8
- SFS72:KRIS.DISTJOBS.DISTFILE.AGWPROF 9 9
- SFS72:KRIS.DISTJOBS.DISTFILE.AUTOLOG1 89 10
1= Help 2= Refresh 3= Quit 4= Sort(fm) 5= Sort(dir) 6= Auth
7= Backward 8= Forward 9= 10= 11= Filelist 12= Cursor
QSFSBLK refreshes stats; SSIZE sorts by size; SFILES sorts by nbr of files
====>
X E D I T 1 File
When at least one DFSMS-migrated file is found, we display what percentage
of the 4K blocks has been migrated by DFSMS:
KRIS DIRLIST A0 V 319 Trunc=319 Size=17 Line=1 Col=1 Alt=47
Cmd Fm Directory Name/Minidisk Address 4K blocks Migr
- SFS72:MAINT. 23 87%
- SFS72:MAINT.DISTJOBS 0 0%
- SFS72:MAINT.DISTJOBS.DISTFILE 0 0%
- SFS72:MAINT.DISTJOBS.DISTFILE.LOGS 10 0%
- SFS72:MAINT.ARCHIEF 50 90%
* - SFS72:MAINT.BACKUP 47 79%
- SFS72:MAINT.BATCH 8 88%
- SFS72:MAINT.RSCS 6,600 90%
- SFS72:MAINT.VTAM 10 0%
- 0FC0
Z 019D
Y 019E
S 0190
A 0191
E 0593
D 0594
- 0999
1= Help 2= Refresh 3= Quit 4= Sort(fm) 5= Sort(dir) 6= Auth
7= Backward 8= Forward 9= 10= 11= Filelist 12= Cursor
QSFSBLK refreshes stats; SSIZE sorts by size; SMIGR sorts by % migrated
====>
X E D I T 1 File
The changed DIRLIST setup looks like:
1= Help 2= Refresh 3= Quit 4= Sort(fm) 5= Sort(dir) 6= Auth
7= Backward 8= Forward 9= 10= QSFSBLK 11= Filelist 12= Sort size
Issue "QSFSBLK" (F10) to get file statistics for all SFS dirs
====>
X E D I T 1 File
Changes in V 1.2 4 April 2001
We now insert "," separators in the number of 4K blocks, which
improves readability.
Changes in V 1.2.a 4 May 2001
We try to avoid an accidental DELETE USER in the following case:
- You use SET FILEPOOL SFSONE
- Then you start SFSULIST POOLTWO to list the users in pool POOLTWO
- In SFSULIST you enter DELETE USER /N
==> The command executed becomes "DELETE USER userid", which means
    to CMS: "delete user 'userid' in pool SFSONE".
    You now get a warning message.
==> Solution: enter "DELETE USER", "DELETE USER /", or
    "DELETE USER /n /p", and the right filepool is addressed.