Mark Lorenc's Home Page
My career with IBM started in 1981. My initial job was in Poughkeepsie, NY as a tester on the VM/HPO product. My first project was to test the support for >16Meg of storage on HPO 2.5. Wow! Who could possibly need more than 16Meg of storage on a computer? I moved with VM to Kingston, NY in 1983 and worked in VM/HPO development until moving to Endicott, NY with VM/ESA in 1990.
My current job is working on the CP RCPU (real CPU support), SPOOL, HSERV (Host Services), and VSIM (Virtual Simulation) subsystems of the IBM z/VM product. I do a combination of service and development work in these areas of the Control Program.
I am now the technical owner of the z/VM scheduler and dispatcher as well. On August 20, 2002 the U.S. Patent Office issued a patent (number 6,438,704) for some of the work John Harris and I did on the VM scheduler. If you'd like to read the patent, go to the patent office search web site and enter the patent number from above.
Over the years I have been involved in many VM releases and projects, first as a tester and then as a developer. Here is a list of the projects I worked on. This list may bring back some memories to those of you who have been long time VM community members.
- Function tester in Poughkeepsie, NY
- VM/HPO 2.5 Greater than 16M Support
- Tester in Kingston, NY
- VM/HPO 3.0 retrofit on new VM/SP release
- VM/HPO 3.4 Swapper Paging Support (team leader)
- VM/HPO 4.0 retrofit on new VM/SP release
- Developer in Kingston, NY
- VM/HPO 3.6 multi-exposure cache DASD device support
- VM/HPO 5.0 SPOOL enhancements
- DIRMAINT release 3 (developer -- team leader)
- DIRMAINT release 4 (developer -- team leader)
- Developer in Endicott, NY
- VM/ESA 1.2.2 SPXTAPE (team leader)
- z/VM 3.1.0 z/Architecture support (VSIM team leader)
- z/VM 4.1.0
- z/VM 4.2.0
- z/VM 4.3.0
- z/VM 4.4.0 Scheduler Lock Contention Reduction
- z/VM 5.1.0
- z/VM 5.2.0
- z/VM 5.3.0 Support for 32 CPUs and Scheduler Lock Contention Reduction part 2 (team leader) and Specialty engine support (associated U.S. patent 7,500,037)
- z/VM 5.4.0 Specialty engine support part 2
- z/VM 6.2.0 Live Guest Relocation development team leader (associated U.S. patent 8,533,714)
- z/VM 6.3.0 HiperDispatch -- More efficient utilization of CPU hardware resources for dispatched work
- z/VM 6.3.0 APAR VM65586 CPU Scalability development team leader -- remove performance inhibitors (mostly MP effects) so z/VM can use larger numbers of logical processors efficiently. For more details, go here. (associated U.S. patent 9,411,630).
- z/VM 6.4.0 Team leader for Guest Large Page (1 Meg pages) support
You can read about one of my recent projects called Live Guest Relocation in an article by me that starts on page 38 of the online magazine here. U.S. patent 8,533,714 describes how relocation domains are used to allow relocation of a guest among z/VM systems running on hardware with different capabilities. There is another, more recent article by Gabe Goldberg on the Destinationz website here.
One of my projects in the distant past was as the team leader for the development of SPXTAPE. Click on the following for an in-depth description of SPXTAPE: what it is and how it works.
OK, SPXTAPE is not especially new since it became available with VM/ESA 1.2.2, but it's probably new to some of you. Check this out if you use SPTAPE and want a faster, better way.
As a matter of fact, SPXTAPE can be used in conjunction with a package called SPFPACK from the VM Download Library to do some useful things. If you need a way to drain all SPOOL files from a particular CP Owned DASD volume so that it can be taken offline, you might want to take a look at the SPFPACK package.
In the z/VM 3.1.0 project I was team leader for the VSIM portion of the 64-bit VM development project to support the new IBM zSeries machines. This large software development project was part of the development of z/VM. The VSIM and Storage Management teams of our organization spent several years on the development of the 64-bit portion of this new VM product.
In the z/VM 4.4.0 release, I wrote the code necessary to decrease contention for the scheduler lock. The scheduler lock is a spin lock that can be held by only one processor at a time in a multi-processor configuration. Contention for this lock was reduced by creating a new lock for TRQBK management and moving that function's serialization out from under the scheduler lock.
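The idea here is ordinary lock splitting: give a frequently updated structure its own lock so those updates stop contending with everything else serialized by the big lock. A minimal sketch in C with pthreads (the real code is CP assembler, and all names below are illustrative, not CP's):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical sketch of lock splitting: before, every timer request
 * block (TRQBK) update took the global scheduler lock; after, the timer
 * queue has its own lock, so timer maintenance no longer contends with
 * scheduling decisions. */

typedef struct trqbk {
    struct trqbk *next;
    long expires;            /* illustrative timer expiration value */
} trqbk_t;

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER; /* scheduler state only */
static pthread_mutex_t trq_lock   = PTHREAD_MUTEX_INITIALIZER; /* timer queue only */
static trqbk_t *timer_queue = NULL;

/* Timer insertion now serializes on trq_lock alone; processors making
 * scheduling decisions under sched_lock are unaffected. */
void trq_insert(trqbk_t *t)
{
    pthread_mutex_lock(&trq_lock);
    t->next = timer_queue;
    timer_queue = t;
    pthread_mutex_unlock(&trq_lock);
}
```

The benefit is purely statistical: the total amount of serialized work is unchanged, but the processors spinning on each lock are now only those that actually need that structure.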
In the z/VM 5.3.0 release, I was responsible for the team that upgraded CP to allow use of 32 CPUs. This involved functional changes as well as performance improvements. The performance work reduced how frequently the scheduler lock experienced contention. We did this by creating a new type of spin lock that can be held shared by multiple processors or exclusively by one processor at a time. U.S. patent 7,500,037 was issued for this shared spin lock mechanism.
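The shared/exclusive idea is what is often called a reader-writer spin lock. A minimal sketch in C11 atomics, assuming one state word whose low bits count shared holders and whose high bit marks an exclusive holder (again illustrative, not CP's actual implementation or the patented mechanism's details):

```c
#include <stdatomic.h>

/* One atomic state word: values below WRITER count shared holders;
 * exactly WRITER means one exclusive holder. */
#define WRITER 0x10000

typedef struct { atomic_int state; } rwspin_t;

static void rwspin_init(rwspin_t *l) { atomic_init(&l->state, 0); }

/* Shared acquire: many processors may hold the lock at once,
 * as long as no one holds it exclusively. */
static void rwspin_lock_shared(rwspin_t *l)
{
    for (;;) {
        int s = atomic_load(&l->state);
        if (s < WRITER &&
            atomic_compare_exchange_weak(&l->state, &s, s + 1))
            return;                      /* joined the shared holders */
    }
}

static void rwspin_unlock_shared(rwspin_t *l)
{
    atomic_fetch_sub(&l->state, 1);
}

/* Exclusive acquire: spin until no one at all holds the lock. */
static void rwspin_lock_exclusive(rwspin_t *l)
{
    int expected = 0;
    while (!atomic_compare_exchange_weak(&l->state, &expected, WRITER))
        expected = 0;                    /* retry from the "free" state */
}

static void rwspin_unlock_exclusive(rwspin_t *l)
{
    atomic_store(&l->state, 0);
}
```

The win for contention is that the common case (many processors only reading scheduler state) no longer serializes: shared holders proceed in parallel, and only the occasional exclusive holder spins everyone out.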
Feel free to contact me at firstname.lastname@example.org