The name 'multitasking' is a bit of a misnomer on single-CPU machines, since one CPU can only execute a single stream of instructions at a time. The task whose instructions are currently being executed is said to be running, while the others are waiting for their turn. Switching the CPU's execution stream from a running task to a waiting one is known as a context switch. True multitasking is only possible on machines with more than one CPU, but by letting tasks share resources such as the CPU and main memory, even a single-CPU machine can appear to run different tasks simultaneously, reducing user wait times.
There are three main scheduling strategies operating systems use to create this illusion of multitasking:
This strategy achieves the appearance of multitasking by letting the CPU run a single task until it reaches an instruction that requires it to wait for an external event, at which point another task is executed in the meantime. The CPU scheduler can, however, also forcibly swap the running task for a waiting one.
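The co-operative half of this strategy can be sketched with Python generators: each task voluntarily yields the CPU when it would otherwise wait, and a simple queue decides who runs next. The task names and the simulated I/O are illustrative, not taken from any real scheduler.

```python
from collections import deque

def download(name, log):
    # A co-operative task voluntarily yields whenever it would
    # otherwise have to wait for an external event (simulated I/O here).
    for chunk in range(2):
        log.append(f"{name}:{chunk}")
        yield  # relinquish the CPU while 'waiting' for the next chunk

def run(tasks):
    # Resume each task until it yields, then move to the next one;
    # tasks that finish raise StopIteration and are dropped.
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)
            queue.append(task)
        except StopIteration:
            pass

log = []
run([download("A", log), download("B", log)])
print(log)  # the two tasks' steps end up interleaved
```

Because each task decides when to yield, a task that never reaches a waiting point would monopolise the CPU; that is exactly the weakness the forcible switch mentioned above addresses.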
In systems implementing a time-sharing strategy, tasks may run for a set amount of time before they must relinquish the CPU to the next task in the queue and rejoin that queue themselves. Tasks may be programmed to stop after that amount of time, or they may be 'kicked off' the CPU by a scheduled hardware interrupt. By switching tasks quickly and executing each for only a short time, the system makes them appear to run simultaneously.
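A minimal sketch of the time-sharing idea, assuming each task is just an amount of remaining work: every task runs for at most one quantum, then is forced to the back of the queue, standing in for the timer interrupt that ends its time slice.

```python
from collections import deque

def round_robin(tasks, quantum):
    # Each task is a (name, remaining_work) pair. A task runs for at most
    # `quantum` units per turn, then rejoins the back of the queue,
    # mimicking the hardware timer that ends its time slice.
    queue = deque(tasks)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        schedule.append((name, ran))
        if remaining > ran:
            queue.append((name, remaining - ran))
    return schedule

print(round_robin([("A", 5), ("B", 3)], quantum=2))
# → [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

A shorter quantum makes the interleaving finer-grained and the illusion of simultaneity stronger, at the cost of more context switches.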
This strategy relies on tasks being triggered by external events: when such an event occurs, the currently running task is paused so that the triggered task may be executed. This form of multitasking is often used to control mechanical devices such as industrial robots, where a timely response is critical.
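The preemption here can be sketched as a background control task that runs tick by tick, but is interrupted the instant an event arrives, as a hardware interrupt would interrupt it. The task and event names are illustrative only.

```python
def run(background, events, ticks):
    # `events` maps arrival times to handler names. The background task
    # runs one step per tick, but an arriving event preempts it
    # immediately: its handler runs in that tick instead.
    trace = []
    for tick in range(ticks):
        if tick in events:
            trace.append(events[tick])  # service the interrupt first
        else:
            trace.append(background)    # otherwise the background task keeps running
    return trace

print(run("control-loop", {2: "sensor-irq", 4: "motor-irq"}, ticks=6))
# → ['control-loop', 'control-loop', 'sensor-irq',
#    'control-loop', 'motor-irq', 'control-loop']
```

The point of the sketch is the bound on latency: an event handler runs in the very tick its event arrives, rather than waiting for the background task to finish or to use up a time slice.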
With the modern need for multitasking, programmers now often structure applications as sets of co-operating processes that do not fail if they must relinquish the CPU quickly, perhaps many times before fully executing. The concept of threads was also invented to make such process co-operation and data exchange easy. Sharing the entire memory space was judged the most efficient means of achieving this, so threads are processes that run in the same memory context as one another, allowing efficient co-operation and exchange.
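The shared memory context can be seen directly with Python's standard threading module: two threads update the same object with no copying between them. The counter and lock names are illustrative; the lock is needed because sharing memory also means the threads' read-modify-write sequences can interleave.

```python
import threading

counter = [0]            # shared state: both threads see the same object
lock = threading.Lock()  # guards the read-modify-write below

def worker(increments):
    for _ in range(increments):
        with lock:           # one thread at a time in the critical section
            counter[0] += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter[0])  # both threads updated the same memory: 20000
```

Separate processes, by contrast, would each get a private copy of `counter` and would need an explicit channel (pipes, sockets, shared-memory segments) to exchange the result.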