Page replacement algorithm

Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s; that mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms.

Since then, some basic assumptions made by the traditional page replacement algorithms have been invalidated, resulting in a revival of research.

In particular, trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms, and requirements for page replacement algorithms have changed due to differences in operating system kernel architectures.

[1] Modern general purpose computers and some embedded processors have support for virtual memory.

The CPU sets the access bit when the process reads or writes memory in that page.
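To make this concrete, the following minimal Python simulation models a page table entry whose accessed and dirty bits are set on reads and writes and are periodically sampled and cleared by the operating system. The class and function names are illustrative and do not come from any real kernel.

```python
from dataclasses import dataclass

@dataclass
class PTE:
    """Simulated page table entry with hardware-maintained status bits."""
    frame: int
    accessed: bool = False   # set by "hardware" on any read or write
    dirty: bool = False      # set by "hardware" on a write

    def read(self):
        self.accessed = True

    def write(self):
        self.accessed = True
        self.dirty = True

def sample_and_clear(ptes):
    """OS-side sampling: record which pages were touched since the last
    scan, then clear the accessed bits for the next interval."""
    touched = [p.frame for p in ptes if p.accessed]
    for p in ptes:
        p.accessed = False
    return touched
```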

The operating system can detect accesses to memory and files through several means, such as the hardware-maintained access bit described above. Most replacement algorithms simply return the target page as their result.

In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full duplex channels to the stable storage, and cleaning was customarily overlapped with paging.

Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.

Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.

This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is known beforehand and is amenable to static analysis of its memory reference patterns, or when only a class of applications allowing run-time analysis is admitted.
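Although the optimal algorithm is unrealizable online, it is easy to express in a simulator where the full reference string is known in advance. The following Python sketch (names are illustrative) picks the resident page whose next use lies farthest in the future:

```python
def optimal_evict(frames, future_refs):
    """Bélády's optimal choice: evict the resident page whose next use
    lies farthest in the future, or that is never used again."""
    best_page, best_dist = None, -1
    for page in frames:
        try:
            dist = future_refs.index(page)   # distance to the next use
        except ValueError:
            return page                      # never used again: evict now
        if dist > best_dist:
            best_page, best_dist = page, dist
    return best_page
```

In a simulation, future_refs would be the remainder of the recorded reference string after the current fault.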

The efficiency of randomized online algorithms for the paging problem is measured using amortized analysis.

[6] Partial second chance is provided by skipping a limited number of entries with valid translation table references,[7] and additionally, pages are displaced from a process's working set to a systemwide pool from which they can be recovered if not already re-used.

If all the pages have their reference bit cleared, then the second chance algorithm degenerates into pure FIFO.
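A minimal sketch of the second chance scan, assuming the resident pages are kept in a FIFO deque and their reference bits in a dictionary (both names are illustrative):

```python
from collections import deque

def second_chance_evict(queue, referenced):
    """Second chance: examine pages in FIFO order, but give a page whose
    reference bit is set another pass by clearing the bit and moving it
    to the tail.  If every bit is set, the scan clears them all and
    evicts the original head, i.e. plain FIFO."""
    while True:
        page = queue.popleft()
        if referenced.get(page, False):
            referenced[page] = False
            queue.append(page)        # give it a second chance
        else:
            return page               # victim to be replaced

# Example: queue = deque([3, 7, 1]); referenced = {3: True, 7: False, 1: True}
# second_chance_evict(queue, referenced) -> 7, with page 3 moved to the tail.
```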

When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location.

If R is 0, the page under the hand is replaced with the new page and the hand is advanced one position. Otherwise, the R bit is cleared, the clock hand is incremented, and the process is repeated until a page is replaced.
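The scan just described can be written down directly. The sketch below assumes an array of resident frames, a parallel array of R bits, and the current hand position (all names illustrative):

```python
def clock_evict(frames, ref_bits, hand):
    """CLOCK: inspect the R bit under the hand.  A page with R == 0 is
    replaced; a page with R == 1 has its bit cleared and the hand
    advances.  Returns (victim_index, new_hand_position)."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = False           # second chance: clear the bit
            hand = (hand + 1) % len(frames)  # advance the hand
        else:
            victim = hand
            hand = (hand + 1) % len(frames)
            return victim, hand
```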

The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval.

While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice.

The cost of this implementation lies in the fact that items in the list have to be moved on every memory reference, which is very time-consuming.
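The bookkeeping can be illustrated with an ordered map. The point of this sketch (a toy, not a kernel implementation) is that access() must reorder the structure on every single reference, which no general purpose system can afford to do for real memory accesses:

```python
from collections import OrderedDict

class LRUCache:
    """Exact LRU over an ordered map: every access moves the page to the
    most-recently-used end, which is exactly the per-reference work that
    makes strict LRU expensive in practice."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # least recently used entry first

    def access(self, page):
        """Touch a page; return the evicted page on a fault, else None."""
        if page in self.pages:
            self.pages.move_to_end(page)     # becomes most recently used
            return None
        victim = None
        if len(self.pages) >= self.capacity:
            victim, _ = self.pages.popitem(last=False)  # evict LRU page
        self.pages[page] = True
        return victim
```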

Beyond its implementation cost, LRU's performance tends to degenerate under many quite common reference patterns.

As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations.

Many of the proposed LRU modifications try to detect looping reference patterns and to switch to a more suitable replacement algorithm, such as Most Recently Used (MRU).
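As a toy illustration of why MRU helps here (this is not any particular proposed modification): for a cyclic scan over one more page than there are frames, LRU always evicts exactly the page that will be needed next, while evicting the most recently used page keeps the rest of the loop resident.

```python
def simulate(evict, frame_count, refs):
    """Count page faults for a given eviction choice over a reference
    string; `resident` is ordered from least to most recently used."""
    resident, faults = [], 0
    for page in refs:
        if page in resident:
            resident.remove(page)            # re-appended as most recent
        else:
            faults += 1
            if len(resident) >= frame_count:
                resident.remove(evict(resident))
        resident.append(page)
    return faults

cycle = [0, 1, 2, 3] * 10                    # loop over 4 pages, 3 frames
lru_faults = simulate(lambda r: r[0], 3, cycle)    # evict least recently used
mru_faults = simulate(lambda r: r[-1], 3, cycle)   # evict most recently used
# Every one of the 40 references faults under LRU here, while MRU
# misses far less often once the frames are warm.
```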

A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004.

The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use.

Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out.
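The usual refinement along these lines is the aging technique: each page gets a small counter that is shifted right once per clock interval, with the reference bit inserted at the top, so that only the most recent intervals (16 here) influence the value and recent use outweighs old use. A Python sketch with illustrative data structures:

```python
def age_counters(counters, referenced, bits=16):
    """Aging: once per clock interval, shift each page's counter right
    and place its reference bit in the top position, then clear the bit
    for the next interval."""
    top = 1 << (bits - 1)
    for page in counters:
        counters[page] >>= 1
        if referenced.get(page, False):
            counters[page] |= top
        referenced[page] = False

def aging_victim(counters):
    """Evict the page with the smallest counter, i.e. the one used
    least recently and least frequently within the tracked intervals."""
    return min(counters, key=counters.get)
```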

Many of the techniques discussed above assume the presence of a reference bit associated with each page.

These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.

Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory.

A page referenced this way will be removed from the Free or Modified list and placed back into a process working set.
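A rough Python model of this free/modified list arrangement (the names loosely follow the OpenVMS terminology used above and are otherwise illustrative):

```python
free_list, modified_list = [], []     # pages removed from working sets

def remove_from_working_set(page, dirty):
    """Displaced pages are not reused immediately; they are parked on a
    systemwide list so a later soft fault can recover them cheaply."""
    (modified_list if dirty else free_list).append(page)

def soft_fault(page, working_set):
    """If the page is still on a list, reclaim it without any disk I/O."""
    for lst in (free_list, modified_list):
        if page in lst:
            lst.remove(page)
            working_set.add(page)
            return True
    return False                       # hard fault: must read from backing store
```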

In the basic case, when a page is accessed by a user-space program it is put in the head of the inactive set.
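A much simplified sketch of such a two-list scheme, assuming Python deques for the active and inactive sets (real implementations track reference flags and scan lazily rather than promoting immediately):

```python
from collections import deque

active, inactive = deque(), deque()    # hot and cold page lists

def on_access(page):
    if page in active:
        return                          # already considered hot
    if page in inactive:
        inactive.remove(page)
        active.appendleft(page)         # repeated access: promote
    else:
        inactive.appendleft(page)       # first access: head of inactive set

def evict():
    """Reclaim from the cold (tail) end of the inactive list, refilling
    it from the tail of the active list when it runs empty."""
    if not inactive and active:
        inactive.appendleft(active.pop())   # demote a cold active page
    return inactive.pop()
```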

The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler)[clarification needed]