User:Douglas Gross/Multitasking

From Wikipedia, the free encyclopedia


A process is a sequence of steps forming the smallest unit of work needed to complete a single task. When you run multiple programs on your computer at once, you are multitasking. Although the processor manages multitasking, it does not actually handle more than one process at a time. A program may consist of one or more processes, and each process is allocated a certain amount of processor time before the processor moves on to another process. When the processor has made its rounds through the other processes, it returns to the process it was originally working on. Only computers with more than one processor, or processors with multiple cores or hyper-threading capability, can run more than one process simultaneously.
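As a rough illustration (not a real scheduler), the following Python sketch gives each of several made-up processes one unit of work per turn; the result looks like all of them progressing at once even though only one runs at any moment:

  # Hypothetical processes and their remaining work units (illustration only).
  processes = {"editor": 5, "browser": 7, "player": 3}

  while processes:
      for name in list(processes):
          processes[name] -= 1                     # one time slice of work
          print(f"{name}: {processes[name]} units of work remaining")
          if processes[name] == 0:                 # the process has finished
              del processes[name]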

The illusion that the computer is actually running a number of processes simultaneously is what is widely known as multitasking. The two types of multitasking are cooperative multitasking and preemptive multitasking. Cooperative multitasking was used with Windows 3.x and MultiFinder, but its weakness is that the application using the processor must voluntarily release it, and an application does not do so until it is completely finished with the processor. The only advantage of this approach was that the user could schedule tasks to be performed by the CPU. If other processes are waiting to use the processor and the running application hangs, however, the entire system becomes unstable. In preemptive multitasking, the OS (operating system) allocates each program a small amount of processor time in turn until the programs' processes are complete.
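A loose sketch of cooperative multitasking can be written with Python generators, assuming invented task names and step counts: each task keeps the processor until it voluntarily yields, and a task that never yields would stall every other task, mirroring the instability described above.

  def task(name, steps):
      # Each step prints, then the task voluntarily gives up the processor.
      for i in range(steps):
          print(f"{name}: step {i + 1}")
          yield

  tasks = [task("A", 2), task("B", 3)]
  while tasks:
      for t in list(tasks):
          try:
              next(t)                # run the task until its next yield
          except StopIteration:      # the task has finished; drop it
              tasks.remove(t)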

Preemptive multitasking changes the order of execution by using interrupts. If another process needs to execute ahead of its scheduled time, an interrupt is generated; an interrupt is also generated with each pulse of the computer's internal clock. Each time an interrupt occurs the computer can deal with it in one of two ways: it can remove the interrupted process from the processor, or it can let the process continue with interrupts disabled. If interrupts are disabled, the process executes until it has finished its critical section, at which point it should re-enable interrupts on itself. If the software running the process is not written to re-enable interrupts after the critical section completes, other processes cannot gain access to the processor or to the process's entry in the process table. For this reason, interrupts are normally only disabled for processes that belong to the functionality of the OS and are consequently dealt with by the OS. Windows 95, Windows NT, Amiga OS, UNIX and OS/2 use preemptive multitasking.
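Hardware interrupt disabling cannot be shown directly from user-level code, but a loose Unix-only analogy in Python is to block a signal around a critical section and re-enable it afterwards; forgetting the final unblock would leave the program deaf to that signal, much like a process that never re-enables interrupts:

  import signal
  import time

  # Block SIGINT (Ctrl+C) while the critical section runs ("disable interrupts").
  signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
  try:
      print("critical section: Ctrl+C is deferred for two seconds")
      time.sleep(2)                                   # the critical work
  finally:
      # Re-enable the signal; skipping this step would block Ctrl+C forever.
      signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT})
  print("critical section finished")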

Multi-user, multitasking operating systems can drive CPU utilization close to 100%, but they require memory to be partitioned into separate sections so that each section can hold a single process. Memory is partitioned into fixed-sized or variable-sized partitions. When processes become too large or too numerous for RAM (random access memory), they are stored in virtual memory, a paging file maintained by the operating system that uses hard disk space to simulate RAM (physical memory). Virtual memory, however, is much slower than physical memory. A virtual memory paging file is sometimes referred to as a VMP.
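A toy Python sketch of fixed-size partitioning, with invented partition sizes and process names: each partition holds at most one process, and a process that fits in no free partition must wait (in a real system it could instead be paged out to virtual memory):

  partitions = [{"size": 64, "owner": None},
                {"size": 128, "owner": None},
                {"size": 256, "owner": None}]   # sizes in KB, illustration only

  def load(process, needed):
      # Place the process in the first free partition large enough to hold it.
      for p in partitions:
          if p["owner"] is None and p["size"] >= needed:
              p["owner"] = process
              return f"{process} loaded into the {p['size']} KB partition"
      return f"{process} must wait: no free partition is large enough"

  print(load("editor", 100))    # fits in the 128 KB partition
  print(load("browser", 200))   # fits in the 256 KB partition
  print(load("game", 300))      # too large for any partition, so it waits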

On a computer with more than one processor, multiple processes or threads are handled simultaneously by sending them to different processors. A process can be split into multiple threads that execute independently, so the process as a whole completes faster. A system with more than one processor, or with a multicore processor, can execute multiple threads truly in parallel. Threading is a programming technique that can be applied to symmetric multiprocessing (SMP) systems, which use multiple processors.
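A small sketch using Python's standard multiprocessing module shows the idea: the work is split into independent pieces that the operating system is free to place on different processors or cores (the work function and inputs are made up for the example):

  from multiprocessing import Pool

  def count_squares(n):
      # An independent chunk of work that needs no data from the other chunks.
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      with Pool(processes=4) as pool:             # up to four worker processes
          results = pool.map(count_squares, [10_000, 20_000, 30_000, 40_000])
      print(results)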

In SMP, resources are shared and the workload is balanced among the processors. The processors share an I/O bus and memory, and a single OS (operating system) controls them all. The OS must be multi-thread enabled, however, and the application must be multi-threaded. SMP systems work well as network servers with large numbers of users, but they require hardware with multiple processors.

Parallel processing is the use of threading on multiple processors or multicore processors to obtain better processing speeds. A multicore processor used for parallel processing is referred to simply as a parallel processor, while a computer that uses a very large number of processors is described as a massively parallel processing (MPP) system.

Processes can be in any one of five states, assigned by the operating system (a short sketch of these states and their transitions follows the list):

  • New: When a process is first created
  • Ready: When it awaits execution by a processor
  • Running: When a processor is executing the process
  • Blocked or Waiting: When the process is waiting for input (such as from a user or another process), or when it receives an interrupt signal from another process with higher priority
  • Terminated: When a process has finished execution or has encountered an error
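A minimal Python sketch of these five states and some typical transitions between them; the event names and the simplified transition table are invented for the example:

  from enum import Enum, auto

  class State(Enum):
      NEW = auto()
      READY = auto()
      RUNNING = auto()
      BLOCKED = auto()
      TERMINATED = auto()

  TRANSITIONS = {
      (State.NEW, "admit"): State.READY,          # scheduler accepts the process
      (State.READY, "dispatch"): State.RUNNING,   # processor starts executing it
      (State.RUNNING, "wait"): State.BLOCKED,     # it waits for input or I/O
      (State.BLOCKED, "input ready"): State.READY,
      (State.RUNNING, "preempt"): State.READY,    # a higher-priority interrupt
      (State.RUNNING, "exit"): State.TERMINATED,  # finished or hit an error
  }

  state = State.NEW
  for event in ["admit", "dispatch", "wait", "input ready", "dispatch", "exit"]:
      state = TRANSITIONS[(state, event)]
      print(f"{event!r:>14} -> {state.name}")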

The processor scheduler uses the process table to organize the order in which processes will be assigned to the processor, for how long, and how best to optimize CPU time. Just as there is cooperative and preemptive multitasking, there is cooperative (non-preemptive) and preemptive process scheduling.

In cooperative scheduling a process executes until it terminates, and other processes wait until the executing process completes. No interrupts are used, but if a process hangs the entire system becomes unstable. Cooperative scheduling is used when a single process needs to be running all the time on a specific computer on a network.

In preemptive scheduling a running process can be temporarily blocked during execution by a higher-priority process using an interrupt. If a process hangs, another process replaces it and the processor continues to work. Windows 2000 and Linux use preemptive scheduling.

In first-in, first-out (FIFO) scheduling, processes that are initiated first have the highest priority, and priority is assigned consecutively as each new process is added. More important processes must wait until earlier processes terminate. This kind of scheduling suits a single-program environment where the computer is dedicated to one purpose and runs only one program at a time.
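A bare-bones FIFO sketch in Python, with made-up process names and run times: processes run to completion strictly in arrival order:

  from collections import deque

  queue = deque([("P1", 5), ("P2", 2), ("P3", 8)])   # (name, required time units)

  clock = 0
  while queue:
      name, burst = queue.popleft()      # the earliest arrival goes first
      clock += burst                     # it runs until it terminates
      print(f"{name} finishes at time {clock}")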

Round-robin scheduling is where each process executes for a fixed amount of time (a time slice) determined by the scheduler. When a process is executed round-robin, it must resume execution from its previous state: the process ran earlier, made some progress, and was returned to the waiting state, so it must be restarted exactly as it was left. The CPU registers and the stack must be restored to the values they held when the process was stopped, which means the state of the process must be saved each time it is stopped. A CPU register is where data must be stored for the processor to work on it, and the stack is the region of memory that holds a process's working data. The term "stack" is also used for a protocol stack, the layered software that implements the formats in which devices exchange data, such as:

  • the type of error checking to be used
  • data compression method, if any
  • how the sending device will indicate that it has finished sending a message
  • how the receiving device will indicate that it has received a message

In round-robin, the time it takes to save and restore processes and to synchronize the timer is called scheduling overhead. If processes are given more or less time than they actually need, the processor slows down.
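A compact Python sketch of round-robin scheduling, with an invented time quantum and invented run times: each process runs for at most one quantum, and if it is not finished its remaining work (its saved state, in miniature) is placed at the back of the queue:

  from collections import deque

  QUANTUM = 3
  queue = deque([("P1", 7), ("P2", 4), ("P3", 9)])   # (name, remaining time)

  clock = 0
  while queue:
      name, remaining = queue.popleft()
      ran = min(QUANTUM, remaining)       # run for one time slice at most
      clock += ran
      remaining -= ran
      if remaining:                       # save its state and requeue it
          queue.append((name, remaining))
          print(f"t={clock}: {name} preempted, {remaining} units left")
      else:
          print(f"t={clock}: {name} terminates")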

Priority scheduling is where each process is assigned a priority and handled according to that priority. Two processes with the same priority are scheduled round-robin, and a single priority can also be assigned to a group of processes, which are then scheduled round-robin within the group.
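A small Python sketch of priority scheduling using a heap, with made-up priorities (lower numbers mean higher priority); equal-priority processes are taken in arrival order here, whereas a fuller version would rotate them round-robin:

  import heapq

  ready = []                                           # (priority, arrival, name)
  for arrival, (name, prio) in enumerate([("logger", 3), ("ui", 1),
                                          ("backup", 3), ("input", 1)]):
      heapq.heappush(ready, (prio, arrival, name))

  while ready:
      prio, _, name = heapq.heappop(ready)             # highest priority first
      print(f"running {name} (priority {prio})")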

Shortest-job-first scheduling is where the process that takes the least amount of time is processed first. While this reduces the average waiting time, it can tie the processor up with a single process. Shortest-job-first is used particularly for batch jobs, which require no user interaction and continue until the batch series is processed. One example of batch processing is credit card billing: instead of sending you a bill for each purchase, batch processing totals all your purchases and sends you one bill at the end of the billing cycle. The shortest-job-first algorithm is used when all jobs are ready for execution and require no preliminary processing.
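A minimal shortest-job-first sketch in Python for a batch of jobs that are all ready at once, with invented job names and run times; running the shortest burst first keeps the average waiting time low:

  jobs = [("billing-run", 9), ("report", 2), ("cleanup", 4)]

  clock, waits = 0, []
  for name, burst in sorted(jobs, key=lambda j: j[1]):   # shortest burst first
      waits.append(clock)                                # time spent waiting
      clock += burst
      print(f"{name} runs for {burst}, finishes at {clock}")
  print("average wait:", sum(waits) / len(waits))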

In shortest-time-remaining scheduling, processes are scheduled and preempted so that the process that will use the least amount of remaining processor time takes priority over the others. The schedule is re-evaluated every time a new process arrives.
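A compact shortest-time-remaining sketch in Python, with invented arrival times and run times: at every time unit the process with the least work left runs, so a newly arrived short job can preempt a longer one:

  jobs = {"P1": {"arrival": 0, "remaining": 8},
          "P2": {"arrival": 1, "remaining": 4},
          "P3": {"arrival": 2, "remaining": 1}}

  clock = 0
  while any(j["remaining"] for j in jobs.values()):
      ready = [n for n, j in jobs.items()
               if j["arrival"] <= clock and j["remaining"] > 0]
      if not ready:
          clock += 1
          continue
      name = min(ready, key=lambda n: jobs[n]["remaining"])  # least time left
      jobs[name]["remaining"] -= 1                           # run one time unit
      clock += 1
      if jobs[name]["remaining"] == 0:
          print(f"{name} terminates at t={clock}")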

Multi-level queue scheduling is where processes are separated by type and placed into separate queues, each processed with the algorithm it requires, such as FIFO, round-robin, priority scheduling, shortest-job-first or shortest-time-remaining. Each queue has a different priority relative to the others, and it is possible for the priority of a process to change during execution.
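A rough two-level sketch in Python, with an invented split between system and user processes: the high-priority system queue is served first-in, first-out, and the lower-priority user queue is served round-robin only when the system queue is empty:

  from collections import deque

  system_q = deque(["driver-update"])            # high priority, FIFO
  user_q = deque([("editor", 4), ("game", 6)])   # low priority, round-robin
  QUANTUM = 2

  while system_q or user_q:
      if system_q:                               # system work always goes first
          print(f"system: {system_q.popleft()} runs to completion")
      else:
          name, left = user_q.popleft()
          left -= min(QUANTUM, left)             # one round-robin time slice
          print(f"user: {name} runs a slice, {left} left")
          if left:
              user_q.append((name, left))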
