Unit Control Block
In IBM mainframe operating systems of the OS/360 line and its successors, including z/OS, a Unit Control Block (UCB) is a memory structure, or control block, that describes any single input/output peripheral device, or unit, to the operating system.
A similar concept in Unix-like systems is the kernel's devinfo structure, addressed by a combination of major and minor device numbers through a device node.
Overview
During initial program load (IPL), the Nucleus Initialization Program (NIP) reads the necessary information from the I/O Definition File (IODF) and uses it to build the UCBs. The UCBs are stored in system-owned memory, in the Extended System Queue Area (ESQA). After IPL completes, UCBs are owned by the I/O Subsystem (IOS). Among the information stored in the UCB are: the device type (disk, tape, printer, terminal, etc.), the address of the device (such as 1002), the subchannel identifier and device number, the channel path IDs (CHPIDs), which define the paths to the device, and, for some devices, the volume serial number (VOLSER), along with much other information.
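As an illustration only, the kind of information listed above might be sketched in C as follows. The field names here are hypothetical; the real UCB layout is defined by IBM's assembler mapping macro (IEFUCBOB) and differs considerably:

    /* Hypothetical sketch of UCB contents; not the real IBM layout. */
    struct ucb_sketch {
        unsigned short device_number;  /* e.g. 0x1002, assigned in the IODF  */
        unsigned int   subchannel_id;  /* subchannel identifier              */
        unsigned char  chpids[8];      /* channel path IDs to the device     */
        unsigned char  device_type;    /* disk, tape, printer, terminal, ... */
        char           volser[6];      /* volume serial number (DASD/tape)   */
        unsigned int   busy : 1;       /* set while an I/O is in flight      */
    };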
The actual I/O at the lowest level is performed by a Start I/O (SIO) assembly instruction kicking off a channel program. Since the SIO instruction is privileged, it is represented in user space by an SVC supervisor call instruction, usually executed via the Execute Channel Program (EXCP) macro. In the distant past, applications may have performed their I/O this way.
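A channel program is a chain of 8-byte Channel Command Words (CCWs). The following C sketch shows the documented format-0 CCW layout and a classic two-command CKD chain; the opcodes are the traditional values, but the buffer addresses are placeholders and the fragment is illustrative rather than something that could drive real hardware:

    #include <stdint.h>

    /* Format-0 CCW: command code, 24-bit data address, flags, count. */
    struct ccw0 {
        uint8_t  cmd;      /* channel command code                  */
        uint8_t  addr[3];  /* 24-bit data address (big-endian)      */
        uint8_t  flags;    /* 0x40 = command-chain to the next CCW  */
        uint8_t  unused;
        uint16_t count;    /* byte count for the operation          */
    };

    /* Seek (X'07') chained to Read Data (X'06'): position the access
       mechanism, then read a record. Addresses left zero here.      */
    struct ccw0 chain[2] = {
        { 0x07, {0, 0, 0}, 0x40, 0, 6  },  /* 6-byte BBCCHH seek argument */
        { 0x06, {0, 0, 0}, 0x00, 0, 80 },  /* read an 80-byte record      */
    };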
Today, if a device is shared between programs, separation of users requires that user programs be prevented from doing this. When a task opens, closes, reads, or writes a data set residing on the device, it calls a set of runtime library routines generally referred to as access methods, providing the device address to them. The UCBs are used in the lower half of the access method complex. IOS is the component that actually performs the SIO on behalf of user-space programs, as requested by those access methods. While the I/O is in flight, the requesting program is usually put to sleep by the operating system. When the I/O completes, the task is woken up and continues, unaware that it was ever suspended.
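In MVS terms, that sleep/wake handshake is the WAIT/POST protocol on an event control block (ECB). A rough C analogue, with a condition variable standing in for the ECB and all names hypothetical:

    #include <pthread.h>
    #include <stdbool.h>

    /* Stand-in for an ECB: the requester waits, IOS "posts" it. */
    struct ecb {
        pthread_mutex_t lock;
        pthread_cond_t  posted;
        bool            done;
    };

    /* Access-method side: after handing the request to IOS,
       sleep until the completion is posted. */
    void wait_for_io(struct ecb *e) {
        pthread_mutex_lock(&e->lock);
        while (!e->done)                      /* task sleeps here...  */
            pthread_cond_wait(&e->posted, &e->lock);
        pthread_mutex_unlock(&e->lock);       /* ...and resumes later */
    }

    /* IOS side: on I/O completion, post the ECB and wake the task. */
    void post_io(struct ecb *e) {
        pthread_mutex_lock(&e->lock);
        e->done = true;
        pthread_cond_signal(&e->posted);
        pthread_mutex_unlock(&e->lock);
    }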
Handling parallel I/O operations
Historically, UCBs were introduced in the 1960s with OS/360. At that time memory was expensive, so a device addressed by a UCB was typically a physical hard disk drive or tape drive with no internal cache. Without one, the device was usually grossly outperformed by the mainframe's channel processor, so there was no reason to execute multiple input/output operations at the same time: the device could not physically handle them.
To this day, while an I/O is active to a device, a flag in the UCB indicates that the device is busy. IOS handles all the serialization and does not issue any other I/O to the device; instead, it places further requests on an internal IOS queue (IOSQ) to wait their turn. When the UCB/device is no longer busy, IOS selects the next I/O from the front of the queue. This continues until no more I/Os are waiting for that particular device.
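A minimal C sketch of that serialization, assuming a simple singly linked FIFO per device (all names hypothetical, not IOS internals):

    /* One FIFO of pending requests per UCB; a new request starts only
       when the busy flag is clear. */
    struct io_request { struct io_request *next; /* channel program, ECB... */ };

    struct device_queue {
        int busy;                        /* busy flag in the UCB     */
        struct io_request *head, *tail;  /* the IOSQ for this device */
    };

    void start_or_queue(struct device_queue *u, struct io_request *r) {
        if (!u->busy) { u->busy = 1; /* issue the I/O for r */ return; }
        r->next = NULL;                  /* otherwise wait in line   */
        if (u->tail) u->tail->next = r; else u->head = r;
        u->tail = r;
    }

    void io_done(struct device_queue *u) {
        struct io_request *next = u->head;  /* first in, first out   */
        if (!next) { u->busy = 0; return; } /* queue drained         */
        u->head = next->next;
        if (!u->head) u->tail = NULL;
        /* issue the I/O for next; busy stays set */
    }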
Workload Manager
Originally there was no real way for the operating system to determine whether a waiting I/O was more or less important than any other waiting I/O: I/Os to a device were handled first in, first out. Sometime during the life of OS/390, Workload Manager (WLM) was introduced, which added "smart" I/O queuing. Using information provided to WLM by the systems programmer, the operating system could determine which waiting I/Os were more or less important than others. WLM would then, in a sense, move a waiting I/O up or down the queue, so that when the device in question was no longer busy, the most important waiting I/O would get the device next. WLM thus improved I/O response times for the more important work being processed. However, there was still the limit of a single I/O to a single UCB/device at any one time.
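Conceptually, WLM turns the FIFO sketched above into a priority queue keyed on an importance derived from the installation's service-class definitions. A hypothetical C sketch of the insertion:

    /* Illustrative only: each pending I/O carries an importance value
       derived from its WLM service class; the queue stays ordered so
       the most important request is dispatched first. */
    struct wlm_request {
        struct wlm_request *next;
        int importance;        /* higher = more important work */
    };

    void enqueue_by_importance(struct wlm_request **head,
                               struct wlm_request *r) {
        struct wlm_request **p = head;
        while (*p && (*p)->importance >= r->importance)
            p = &(*p)->next;   /* stay behind equally important work */
        r->next = *p;          /* but jump ahead of anything less    */
        *p = r;                /* important                          */
    }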
Parallel Access Volume
With modern peripheral devices, the fact that access to the device is serialized below the UCB level has become an important bottleneck. For example, what a modern disk subsystem presents to z/OS as a "physical DASD device" is usually in fact a portion of a large disk array, fronted by the controller's own cache memory. Such a device can execute multiple operations at a time: some are serviced promptly purely from the controller's cache, while others are spread across many of the array's drives. Only a small fraction of concurrent I/Os to the volume actually compete for a single physical magnetic head. Executing many I/O operations in parallel is therefore not only possible but desirable, because such a load is effectively pipelined, greatly increasing the overall utilization of the disk subsystem.
Enter Parallel Access Volume (PAV). With appropriate support in the DASD hardware, PAV allows more than one I/O to a single device at a time. For backwards-compatibility reasons, operations are still serialized below the UCB level, but PAV allows the definition of additional UCBs for the same logical device, each using an additional alias address. For example, a DASD device at base address 1000 could have alias addresses of 1001, 1002 and 1003, each with its own UCB. Since there are now four UCBs for a single device, four concurrent I/Os are possible. Writes to the same extent (a contiguous area of the disk assigned to a data set) are still serialized, but other reads and writes proceed simultaneously. In the first version of PAV, the disk controller statically assigns each alias to a base UCB. In the second version, WLM (Workload Manager) re-assigns aliases to different base UCBs from time to time. In the third version, introduced with the DS8000 controller series (HyperPAV), each I/O picks up any available alias for the UCB it needs.
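To illustrate the effect: with base 1000 and aliases 1001-1003, the volume has four UCBs, and an I/O can start on any one that is free. A hypothetical C sketch:

    /* Illustrative: one base UCB plus three alias UCBs for the same
       logical volume. An I/O starts on any exposure that is not busy;
       only when all four are busy does a request wait on the IOSQ. */
    #define EXPOSURES 4                 /* base 1000 + aliases 1001-1003 */

    struct exposure { unsigned short addr; int busy; };

    struct pav_device {
        struct exposure ucb[EXPOSURES]; /* ucb[0] is the base address */
    };

    struct exposure *pick_ucb(struct pav_device *d) {
        for (int i = 0; i < EXPOSURES; i++)
            if (!d->ucb[i].busy) { d->ucb[i].busy = 1; return &d->ucb[i]; }
        return 0;                       /* all busy: queue the request */
    }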
The net effect of PAVs is to decrease the IOSQ component of disk response time, often to zero. As of 2007, the only restrictions on PAV are the number of alias addresses (255 per base address) and the overall number of devices per logical control unit (256, counting the base plus its aliases).
In smaller computers using SCSI, no comparable problem existed, as SCSI storage devices supported command queueing from the start.
Static versus dynamic PAV
There are two types of PAV alias addresses: static and dynamic. A static alias address is defined, in both the DASD hardware and z/OS, to point to a single specific base address. With dynamic aliases, the number of alias addresses assigned to a specific base address fluctuates based on need; the management of these dynamic aliases is left to WLM, but only if WLM runs in goal mode. Most systems that implement PAV use a mixture of both types: one or perhaps two static aliases are defined for each base UCB, and a number of dynamic aliases are defined for WLM to manage as it sees fit.
As WLM watches the I/O activity in the system, it determines whether there is high contention for a specific PAV-enabled device, that is, whether the base and alias UCBs are busy and work is piling up in the queue. If there is, WLM tries to move aliases from less contended base addresses to the base address experiencing the contention.
WLM also reacts when certain performance goals, as specified in its service classes, are not being met. It looks for alias UCBs that are processing work for less important service classes and, if appropriate, re-associates those aliases with the base addresses serving the more important work.
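A schematic of that rebalancing in C, using queue depth as the only contention measure; the threshold and the single-alias move are invented for illustration, whereas real WLM decisions also weigh service-class goals:

    /* Illustrative: periodically move one dynamic alias from the least
       contended base to the most contended one, if the gap is large. */
    struct base { int queued; int aliases; };

    void rebalance(struct base *b, int n) {
        if (n < 2) return;
        int busiest = 0, idlest = 0;
        for (int i = 1; i < n; i++) {
            if (b[i].queued > b[busiest].queued) busiest = i;
            if (b[i].queued < b[idlest].queued)  idlest  = i;
        }
        /* hypothetical threshold of 4 queued requests */
        if (b[busiest].queued - b[idlest].queued > 4 &&
            b[idlest].aliases > 0) {
            b[idlest].aliases--;    /* take an alias from the idle base */
            b[busiest].aliases++;   /* and give it to the busy one      */
        }
    }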