Paged virtual memory

So we've alluded to page swapping already. Some pages are stored in main memory frames, and can be accessed directly by running threads; some are swapped out to disk, and must be read into memory before they can be accessed.

All operating systems which use paging must somehow find free space on the disk to store swapped-out pages. Some, for example most versions of UNIX, use a fixed area of the disk, a swap partition; others, like Windows 95, store their pages in ordinary files in the filesystem. Either way, we will refer to the place where pages are stored on disk as the swap file.

Sometimes we talk of the memory hierarchy. At the lowest level of the hierarchy is the disk, which is slow yet capacious. Next is main memory, which is several orders of magnitude faster, but smaller and more expensive. Above that sit a few levels of cache memory, which are faster than main memory (but smaller and more expensive again). At the top of the memory hierarchy are the processor registers, which are tiny (under 1K) but blisteringly fast. Ignoring the filesystem for the moment, it is convenient to think of each higher layer as a fast cache which speeds access to the layer beneath.

Registers cache information (variables) held in main memory; main memory caches information (pages) held on disk.

Since memory technologies typically force us to choose between fast, small memories and slow, capacious memories, this hierarchy allows us to leverage the speed of fast caches with the capacity of large backing stores: we get the best of both worlds. We can do this because of the principle of locality. Generally, executing code does not make random reads and writes of memory---accesses to memory tend to be localised in small areas.

With processes, we call the particular locality with which a process is working its working set (of pages). The working set changes over time, as different parts of the program are executed. If a process's working set is all in memory (as opposed to swapped out to disk), the process's threads can execute very efficiently. Thrashing occurs if pages from the working set of the process keep getting swapped out to disk, and must continually be swapped back in again. We must strive to avoid thrashing.
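The working-set idea can be made concrete with a small sketch: treat the working set at time t as the set of pages referenced in the last few accesses. The `window` size, page numbers, and reference string below are all made up for illustration.

```python
def working_set(refs, t, window):
    """Pages referenced in the `window` accesses up to and including time t."""
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

# Hypothetical page reference string: the process first works in pages
# {1, 2, 3}, then its locality shifts to pages {7, 8}.
refs = [1, 2, 1, 3, 2, 7, 7, 8, 7, 8]

print(working_set(refs, 4, 5))   # -> {1, 2, 3}
print(working_set(refs, 9, 5))   # -> {8, 7}: the working set has moved
```

If the frames available to the process can hold the whole working set, almost every access hits main memory; if not, the process faults constantly, which is exactly thrashing.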

Ideally, the OS would

- bring each page into memory just before a thread first needs it; and
- swap out only those pages which will not be needed again for the longest time.
Both of these points require the OS to Know The Future, which would require programming more occult and magical than is usually seen in operating system kernels.

The decision to bring a page into memory is therefore usually deferred until a program actually asks for that page, i.e. until an access to a non-resident page causes a page fault. This is known as demand paging.
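A minimal sketch of pure demand paging (assuming, for simplicity, that there are always free frames, so no replacement is needed): a page is read from the swap file only on its first access, and every later access hits main memory. The reference string is hypothetical.

```python
def demand_page(refs):
    """Count page faults under pure demand paging with unlimited frames."""
    in_memory = set()   # pages currently resident in main-memory frames
    faults = 0
    for page in refs:
        if page not in in_memory:
            faults += 1           # page fault: read the page from the swap file
            in_memory.add(page)
        # else: the access proceeds directly against main memory
    return faults

refs = [1, 2, 1, 3, 2, 1, 4, 3]
print(demand_page(refs))  # -> 4: one fault per distinct page
```

Note that no page is brought in speculatively; the OS pays one fault per distinct page, but never wastes memory on a page that is not used.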

The page replacement algorithm is the code in the OS which decides which pages should be moved out to disk when new frames are required. (Since physical memory holds fewer frames than the total number of pages, this is a tricky decision.) The page replacement algorithm, like the scheduler, is a piece of code which must juggle the various needs of the processes and the OS against the physical constraints of the system, to try to ensure maximum performance.

Some common page replacement algorithms are:

- FIFO (first-in, first-out): evict the page which has been resident longest.
- LRU (least recently used): evict the page which has gone unreferenced for the longest time.
- Clock (second chance): a cheap approximation to LRU, using one reference bit per frame.
- Random: evict a page chosen at random.

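As a sketch, FIFO and LRU replacement can be compared on the same hypothetical reference string with three frames. FIFO evicts the page loaded earliest; LRU evicts the page accessed least recently.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Page faults with FIFO replacement in `frames` frames."""
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the page loaded earliest
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    """Page faults with LRU replacement in `frames` frames."""
    mem, faults = OrderedDict(), 0       # keys ordered least -> most recent
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # hit: mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9
print(lru_faults(refs, 3))   # -> 10
```

On this particular string FIFO happens to do slightly better than LRU; which policy wins depends on the workload, which is why the choice of replacement algorithm is a genuine design decision rather than a solved problem.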
Issues with paged virtual memory systems:


last updated 18 March 1998