In a distributed computing system, it is natural to regard the nodes as independent processes that execute concurrently. These processes communicate with each other to coordinate their activities. Communication may be through synchronization or message-passing. Synchronization is a simultaneous two-way exchange of information (like a telephone call), while message-passing consists of one-way transfers in which the sender needs an explicit acknowledgment to be sure that a message has reached the recipient (like an ordinary letter sent by post).
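To make the contrast concrete, here is a minimal Java sketch of message-passing with an explicit acknowledgment (the class name MessagePassing and the use of queues as one-way channels are illustrative choices, not a prescribed protocol):

\begin{verbatim}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
  public static void main(String[] args) throws InterruptedException {
    // Two one-way channels: messages in one direction, acknowledgments in the other
    BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(1);
    BlockingQueue<String> acks = new ArrayBlockingQueue<>(1);

    Thread receiver = new Thread(() -> {
      try {
        String msg = mailbox.take();            // wait for a message to arrive
        System.out.println("received: " + msg);
        acks.put("ack");                        // confirm receipt to the sender
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    receiver.start();

    mailbox.put("hello");  // one-way transfer: returns once the message is queued
    acks.take();           // only the acknowledgment assures the sender of delivery
    System.out.println("sender: delivery confirmed");
  }
}
\end{verbatim}

Note that the sender's put returns as soon as the message is handed over to the channel; it is the acknowledgment, not the send itself, that tells the sender the message was received.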
However, even in a single-processor system, it is often convenient to regard different components of a program as executing in parallel. Consider, for instance, an interactive application like a web browser. When you are downloading a page that takes a long time to retrieve, you can usually press the Stop button to terminate the download. When programming a browser with such a capability, the most natural model is to regard the download component and the user-interface component (that reacts to button clicks) as separate processes. When the user-interface process detects that the Stop button has been clicked, it notifies the download process that it should terminate its activity.
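A minimal sketch of this structure, assuming Java (the class name Browser, the flag stopRequested and the timing are invented for illustration; a real browser is, of course, far more elaborate):

\begin{verbatim}
public class Browser {
  // Shared flag: written by the user-interface thread, read by the download thread.
  // Declaring it volatile ensures the download thread sees the update promptly.
  private static volatile boolean stopRequested = false;

  public static void main(String[] args) throws InterruptedException {
    Thread download = new Thread(() -> {
      while (!stopRequested) {
        // ... retrieve the next chunk of the page ...
      }
      System.out.println("download terminated");
    });
    download.start();

    Thread.sleep(100);     // stand-in for the user clicking the Stop button
    stopRequested = true;  // the user interface notifies the download component
    download.join();
  }
}
\end{verbatim}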
On a single processor, the run-time environment allocates time-slices to each of these ``logical'' processes, and schedules the time-slices so that all the concurrent processes get a chance to execute. However, there is no guarantee about the relative speeds at which these processes will run: one process may get ten times as many time-slices as another in a given interval.
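The following small Java example makes this nondeterminism visible (the thread names A and B and the loop bound are arbitrary). Running it several times will typically produce different interleavings of the two threads' output:

\begin{verbatim}
public class Interleaving {
  public static void main(String[] args) {
    Runnable task = () -> {
      for (int i = 0; i < 5; i++) {
        System.out.println(Thread.currentThread().getName() + ": " + i);
      }
    };
    new Thread(task, "A").start();
    new Thread(task, "B").start();
    // How A's and B's lines interleave varies from run to run:
    // the scheduler guarantees nothing about relative speeds.
  }
}
\end{verbatim}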
Normally, each concurrent process comes with its own local variables. Thus, when the run-time environment switches from one process to another, it has to save the state of the first process and load the suspended state of the second. Often, however, it is simpler to assume that all the concurrent processes share the same set of variables, so the processes interact via a global ``shared memory''. This makes it possible to switch from one process to another relatively easily, without an elaborate context switch involving all the variables defined in the processes. In the literature, processes that share a global memory in this way are often called ``threads'' (short for ``threads of execution'').
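As a minimal sketch of threads interacting through shared memory, again assuming Java (the class name SharedCounter and the loop bound are illustrative), here are two threads updating a single shared variable:

\begin{verbatim}
public class SharedCounter {
  // Both threads see the same copy of this variable: it lives in shared memory
  static int count = 0;

  public static void main(String[] args) throws InterruptedException {
    Runnable increment = () -> {
      for (int i = 0; i < 100000; i++) {
        count++;  // an unsynchronized read-modify-write of the shared variable
      }
    };
    Thread t1 = new Thread(increment);
    Thread t2 = new Thread(increment);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    // Often prints less than 200000: the two threads' updates can interfere,
    // a first hint of why shared-memory concurrency requires care.
    System.out.println("count = " + count);
  }
}
\end{verbatim}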
We shall study concurrent programming in the framework of threads that operate with a global shared memory. In the rest of these notes, the words thread and process will be used synonymously.