Memory paging

[1] In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.

In CPUs implementing the x86 instruction set architecture (ISA), for instance, memory paging is enabled via the CR0 control register.
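As a minimal C sketch of what this involves: on x86, the PG (paging) flag is bit 31 of CR0, and it can only be set together with the PE (protected mode) flag in bit 0. The helper function below is hypothetical and only computes the register value; the actual write requires a privileged `mov` to `%cr0` in kernel code.

```c
#include <stdint.h>

#define CR0_PE (1u << 0)   /* Protection Enable: bit 0 of CR0  */
#define CR0_PG (1u << 31)  /* Paging Enable: bit 31 of CR0     */

/* Hypothetical helper: returns the CR0 value with paging enabled.
 * Paging requires protected mode, so PE must be set alongside PG. */
uint32_t cr0_with_paging(uint32_t cr0)
{
    return cr0 | CR0_PE | CR0_PG;
}
```

For example, `cr0_with_paging(0)` yields `0x80000001`, i.e. both PE and PG set.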

These segments had to be contiguous when resident in RAM, requiring additional computation and movement to remedy fragmentation.

Some systems clear new pages to avoid data leaks that compromise security; some set them to installation-defined or random values to aid debugging.

This minimizes the amount of cleaning needed to obtain new page frames at the moment a new program starts or a new data file is opened, and improves responsiveness.

As the working set grows, resolving page faults remains manageable until the growth reaches a critical point.

"Thrashing" is also used in contexts other than virtual memory systems; for example, to describe cache issues in computing or silly window syndrome in networking.

In multi-programming or in a multi-user environment, many users may execute the same program, written so that its code and data are in separate pages.

The first computer to support paging was the supercomputer Atlas,[9][10][11] jointly developed by Ferranti, the University of Manchester and Plessey in 1963.

The Supervisor[12] handled non-equivalence interruptions[f] and managed the transfer of pages between core and drum in order to provide a one-level store[13] to programs.

If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than the default.

It is required, however, for the boot partition (i.e., the drive containing the Windows directory) to have a page file on it if the system is configured to write either kernel or full memory dumps after a Blue Screen of Death.

[15] The common advice given to avoid this is to set a single "locked" page file size so that Windows will not expand it.

However, the page file expands only once its initial allocation, which in the default configuration is 150% of the total amount of physical memory, has been filled.

As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the page file is back to its original state.

However, a large page file generally allows the use of memory-heavy applications, with no penalties besides using more disk space.

For this reason, a fixed-size contiguous page file is better, provided that the size allocated is large enough to accommodate the needs of all applications.

[17] This view ignores the fact that, aside from the temporary results of expansion, the page file does not become fragmented over time.

In general, performance concerns related to page file access are much more effectively dealt with by adding more physical memory.

If multiple swap backends are assigned the same priority, they are used in a round-robin fashion (which is somewhat similar to RAID 0 storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.
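As an illustrative Linux configuration (the device names are placeholders), two swap areas can be assigned equal priority with the `pri=` mount option in `/etc/fstab`, causing the kernel to stripe swap writes across them:

```
# /etc/fstab — two swap partitions with the same priority
/dev/sdb2  none  swap  sw,pri=10  0  0
/dev/sdc2  none  swap  sw,pri=10  0  0
```

The same effect can be achieved at runtime with `swapon -p 10 <device>`. The striping pays off only when the devices sit on independent spindles or controllers, so they can actually be accessed in parallel.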

To increase performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead.

[20][21] When residing on HDDs, which are rotational magnetic media devices, one benefit of using swap partitions is the ability to place them on contiguous HDD areas that provide higher data throughput or faster seek time.

The default value is 60. Setting it higher can cause high latency if cold pages need to be swapped back in (for example, when interacting with a program that had been idle), while setting it lower (even to 0) may cause high latency when files that had been evicted from the cache need to be read again; a lower value nevertheless makes interactive programs more responsive, as they are less likely to need cold pages swapped back in.
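On Linux, the tunable in question is `vm.swappiness`; a sketch of inspecting and changing it (the value 10 below is just an example, and writing it requires root):

```
# Read the current value
sysctl vm.swappiness

# Lower it until the next reboot (requires root)
sudo sysctl -w vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
```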

If those pages do not remain in memory, they will have to be faulted in again to handle the next keystroke, making the system practically unresponsive even if it is actually executing other tasks normally.

[31] Swap memory could be activated and deactivated at any moment, allowing the user to choose to use only physical RAM.

The backing store for a virtual memory operating system is typically many orders of magnitude slower than RAM.

Many Unix-like operating systems (for example AIX, Linux, and Solaris) allow using multiple storage devices for swap space in parallel, to increase performance.

A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM.

In addition, the operating system may provide services to programs that envision a larger memory, such as files that can grow beyond the limit of installed RAM.

This nullifies a significant advantage of paging, since a single process cannot use more main memory than the amount of its virtual address space.