Page cache
In computing, page cache, sometimes misleadingly called disk cache, is a transparent cache of disk-backed pages kept in primary storage (RAM) for quicker access. The page cache is typically implemented in operating system kernels that use paging memory management, and it is completely transparent to applications. Memory that is not directly allocated to applications is usually used for the page cache. Because hard disk read speeds are low and random accesses require expensive disk seeks compared to primary storage, a larger page cache is one reason memory upgrades usually yield significant improvements in a computer's speed and responsiveness.[citation needed] The page cache should not be confused with the limited amount of cache built into hard disk hardware, which is more accurately called a "disk buffer".
Memory conservation
- For more details on this topic, see Demand paging.
Since non-dirty pages in the page cache have identical copies in secondary storage (hard disk), discarding and re-using them is much quicker than, and therefore often preferred to, swapping out application memory. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (on Unix-like operating systems this is done through the mmap system call, as in the sketch below). This not only means that the binary files are shared between separate processes, but also that unused parts of binaries will eventually be evicted from main memory, conserving memory.
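The following minimal C sketch (a POSIX system is assumed; the file name example.bin is hypothetical) shows how a file-backed, read-only mapping is created with mmap. Pages of the mapping are supplied from the page cache on first access, so processes that map the same file share the same physical pages.

/* Minimal sketch: map a file read-only; pages are faulted in from the
 * page cache on first access and shared with other processes mapping
 * the same file. POSIX assumed; "example.bin" is a hypothetical file. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.bin", O_RDONLY);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct stat sb;
    if (fstat(fd, &sb) == -1 || sb.st_size == 0) {
        perror("fstat");
        return EXIT_FAILURE;
    }

    /* Map the whole file read-only; the kernel backs the mapping with
     * page-cache pages instead of copying the data into private memory. */
    unsigned char *data = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
    close(fd);  /* the mapping remains valid after the descriptor is closed */

    /* Touching a byte triggers a page fault that is satisfied from the
     * page cache, reading from disk only if the page is not cached yet. */
    printf("first byte: 0x%02x\n", data[0]);

    munmap(data, sb.st_size);
    return EXIT_SUCCESS;
}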
Since cache pages can be easily dropped and re-used, some operating systems, notably Windows NT, even display some memory used for the page cache as "free" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.
Page cache and disk writes
The page cache also aids in writing to disk. Pages that have been modified in memory for writing to disk are marked "dirty" and must be flushed to disk before they can be freed. When a file write occurs, the page backing the affected block is looked up. If it is already in the cache, the write is applied to that page in memory. If it is not, and the write falls exactly on page-size boundaries, the page is not read from disk at all, but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are applied, as the sketch below illustrates.
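The following minimal C sketch (POSIX assumed; scratch.txt is a hypothetical file name) illustrates these write semantics: write() typically returns once the data has been copied into page-cache pages and marked dirty, and fsync() forces the dirty pages to be flushed to disk.

/* Minimal sketch of buffered writes through the page cache.
 * POSIX assumed; "scratch.txt" is a hypothetical file name. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    const char msg[] = "hello, page cache\n";

    /* Returns once the bytes are in the page cache (pages marked dirty),
     * not necessarily once they have reached the disk. */
    if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
        perror("write");
        return EXIT_FAILURE;
    }

    /* Explicitly flush the dirty pages; without this, the kernel writes
     * them back later at its own discretion. */
    if (fsync(fd) == -1) { perror("fsync"); return EXIT_FAILURE; }

    close(fd);
    return EXIT_SUCCESS;
}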
However, not all cached pages can be written back: program code is often mapped read-only or copy-on-write. In the copy-on-write case, modifications made through the mapping are visible only to the process itself and are never written to disk, as the sketch below shows.
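The following minimal C sketch (POSIX assumed; example.bin is again a hypothetical file) demonstrates a copy-on-write mapping created with MAP_PRIVATE: the first write to a mapped page gives the process its own private copy, so the modification is visible only to that process and is never written back to the file.

/* Minimal sketch of a copy-on-write (MAP_PRIVATE) file mapping.
 * POSIX assumed; "example.bin" is a hypothetical file. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.bin", O_RDONLY);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct stat sb;
    if (fstat(fd, &sb) == -1 || sb.st_size == 0) {
        perror("fstat");
        return EXIT_FAILURE;
    }

    /* PROT_WRITE together with MAP_PRIVATE is allowed even though the file
     * was opened read-only, because writes never reach the file itself. */
    unsigned char *data = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
    close(fd);

    /* Modifies this process's private copy of the page only;
     * the on-disk file remains unchanged. */
    data[0] ^= 0xff;
    printf("modified first byte in this process: 0x%02x\n", data[0]);

    munmap(data, sb.st_size);
    return EXIT_SUCCESS;
}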