Job control

Job control in computing refers to the control of multiple tasks or jobs on a computer system: ensuring that each job has access to adequate resources to perform correctly, that competition for limited resources does not cause a deadlock in which two or more jobs are unable to complete, resolving such situations where they do occur, and terminating jobs that, for any reason, are not performing as expected.

Job control has developed from the early days of computing, when human operators were responsible for setting up, monitoring and controlling every job, to modern operating systems, which take on the bulk of that work.

Even with a highly sophisticated scheduling system, some human intervention is desirable. Modern systems permit their users to stop and resume jobs, and to execute them either in the foreground (with the ability to interact with the user) or in the background. See for instance Job control (Unix).
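As a concrete illustration, the minimal sketch below (assuming a POSIX system; the choice of "sleep" as the job and the timings are arbitrary) suspends a child process with SIGSTOP and later resumes it with SIGCONT, which is essentially what happens when a user stops a job and lets it continue in the background.

    /* Minimal sketch of stopping and resuming a job on a POSIX system.
     * Illustrative example only; not part of the original article. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: stand in for a long-running job. */
            execlp("sleep", "sleep", "30", (char *)NULL);
            _exit(127);                    /* only reached if exec fails */
        }

        sleep(1);                          /* let the job get started */
        kill(pid, SIGSTOP);                /* suspend it, as Ctrl-Z would */
        printf("job %d stopped\n", (int)pid);

        sleep(2);                          /* ...do something else... */
        kill(pid, SIGCONT);                /* resume it in the background */
        printf("job %d resumed\n", (int)pid);

        waitpid(pid, NULL, 0);             /* wait for the job to finish */
        return 0;
    }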

History

It became obvious to early computer developers that their fast machines spent most of their time idle because the single program they were executing had to wait while a slow peripheral device completed an essential operation such as reading or writing data. Buffering provided only a partial solution: eventually an output buffer would occupy all available memory, or an input buffer would be emptied by the program, and the system would be forced to wait for a relatively slow device to complete an operation.

A more general solution is multitasking: more than one running program, or process, is present in the computer at any given time. If a process is unable to continue, its context can be stored and the computer can start or resume the execution of another process. At first quite unsophisticated and reliant on special programming techniques, multitasking soon became automated and was usually performed by a special process called the scheduler, which has the ability to interrupt and resume the execution of other processes. Typically, a driver for a peripheral device suspends execution of the current process if the device cannot complete an operation immediately, and the scheduler places the process on its queue of sleeping jobs. When the peripheral completes the operation, the process is re-awakened. Similar suspension and resumption also apply to inter-process communication, where processes that need to communicate with one another asynchronously may sometimes have to wait for a reply.

However, this low-level scheduling has its drawbacks. A process that seldom needs to interact with peripherals or other processes would simply hog processor resources until it completed or was halted by manual intervention. The result, particularly for interactive systems running tasks that frequently interact with the outside world, is a system that is sluggish and slow to react. This problem is resolved by allocating a "timeslice" to each process, a period of uninterrupted execution after which the scheduler automatically puts it on the sleep queue. Processes can be given different priorities, and the scheduler can then allocate varying shares of the available execution time to each process on the basis of the assigned priorities.

This system of pre-emptive multitasking forms the basis of most modern job control systems.
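A toy simulation can make the timeslice-and-priority idea concrete. The C sketch below is purely illustrative (the process names, priorities and slice lengths are invented, and blocking on I/O and the sleep queue are omitted for brevity): each ready process receives a slice of execution proportional to its priority, after which the scheduler moves on to the next one.

    /* Toy simulation of priority-weighted timeslicing.
     * Purely illustrative: names, priorities and slice sizes are invented. */
    #include <stdio.h>

    #define BASE_SLICE 10   /* milliseconds of CPU per unit of priority */

    struct proc {
        const char *name;
        int priority;       /* higher value -> larger share of CPU time */
        int remaining;      /* milliseconds of work left */
    };

    int main(void)
    {
        struct proc procs[] = {
            { "editor",  3, 40 },
            { "compile", 1, 60 },
            { "backup",  2, 30 },
        };
        int n = sizeof procs / sizeof procs[0];
        int done = 0;

        /* Round-robin over the ready processes; each gets a slice
         * proportional to its priority, then the scheduler moves on. */
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (procs[i].remaining <= 0)
                    continue;                     /* already finished */
                int slice = BASE_SLICE * procs[i].priority;
                if (slice > procs[i].remaining)
                    slice = procs[i].remaining;
                procs[i].remaining -= slice;
                printf("%-7s ran for %2d ms, %2d ms left\n",
                       procs[i].name, slice, procs[i].remaining);
                if (procs[i].remaining == 0)
                    done++;
            }
        }
        return 0;
    }

Running the simulation shows the higher-priority "editor" process finishing well before the lower-priority "compile" process, even though all three are kept responsive by regular, bounded slices.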

Real-time computing

Main article: Real-time computing

Pre-emptive multitasking with job control ensures that a system operates in a timely manner most of the time. In some environments (for instance, operating expensive or dangerous machinery), a strong design constraint is the delivery of timely results in all circumstances. In such circumstances, job control is more complex and the role of scheduling is more important.

Job control languages

Early computer operating systems were relatively primitive and not capable of sophisticated resource allocation. Typically, such allocation decisions were made by the computer operator or by the user who submitted a job. Batch processing was common, and interactive computer systems were rare and expensive. Job control languages (JCLs) developed as sets of primitive instructions, typically punched on cards at the head of a deck containing input data, requesting resources such as memory allocation, the serial numbers or names of magnetic tape spools to be made available during execution, or the assignment of filenames or devices to the device numbers referenced by the job. A typical example of this kind of language, still in use on mainframes, is IBM's Job Control Language (also known as JCL). Though the format of early JCLs was intended for punched cards, the format survived the transition to storage in computer files on disk.

As time-sharing systems developed, interactive job control emerged. An end-user on a time-sharing system could submit a job interactively from a remote terminal, communicate with the operators to warn them of special requirements, and query the system about its progress. The user could assign a priority to the job and terminate (kill) it if desired. A job could also, naturally, be run in the foreground, where the user could communicate directly with the executing program; during interactive execution the user could interrupt the job and let it continue in the background, or kill it. This style of interactive computing in a multitasking environment led to the development of the modern shell.
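The core of that shell behaviour can be sketched in C under POSIX. This is a minimal sketch, not a description of any particular shell: the placeholder job ("cat"), the minimal error handling and the omission of the usual race-condition precautions are all simplifications. The parent places the job in its own process group, makes that group the foreground group of the terminal, waits for the job to exit or stop, and then reclaims the terminal.

    /* Sketch of how a shell runs a job in the foreground on a POSIX system.
     * Illustrative only: real shells add much more signal and error handling. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t shell_pgid = getpgrp();

        signal(SIGTTOU, SIG_IGN);            /* let the shell reclaim the terminal later */

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: restore default signal handling, move into its own
             * process group, and run the (placeholder) job. */
            signal(SIGTTOU, SIG_DFL);
            setpgid(0, 0);
            execlp("cat", "cat", (char *)NULL);
            _exit(127);                      /* only reached if exec fails */
        }

        setpgid(pid, pid);                   /* make sure the job has its own group */
        tcsetpgrp(STDIN_FILENO, pid);        /* put that group in the foreground */

        int status;
        waitpid(pid, &status, WUNTRACED);    /* returns when the job exits or stops */

        tcsetpgrp(STDIN_FILENO, shell_pgid); /* shell takes the terminal back */
        if (WIFSTOPPED(status))
            printf("job %d stopped; it can be resumed with SIGCONT\n", (int)pid);
        return 0;
    }

If the sketch is run from a terminal, typing Ctrl-Z while "cat" is reading input stops the job and returns control to the parent, just as suspending a foreground job does in an interactive shell.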
