Talk:Virtual memory

From Wikipedia, the free encyclopedia

Virtual memory was invented at MIT during the early 1960s. It was implemented in the Multics project - the project whose name Unix later punned on.

The original purpose was twofold: to provide more apparent memory than physically exists, and to provide a virtual address space to multiple users. The virtual address space allowed multiple processes to run simultaneously with isolation, so that a problem within one address space would not cause trouble for a process using another virtual address space. This technique was adopted by IBM in its VM operating system and was copied by Unix and Windows. The idea was to provide a kernel, shared by everyone in read-only memory, and a read/write virtual address space that began at the end of the kernel. Every user shared the same kernel and had their own virtual address space.

The page table concept was used to provide discrete page tables for each user. The same hardware concept is used in all cases.
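A minimal Python sketch of the per-user page table idea described above (illustrative only - the page size, table format, and frame numbers are invented for the example): the same virtual address resolves to different physical frames depending on whose table is consulted.

```python
PAGE_SIZE = 4096

def translate(page_table, vaddr):
    """Split a virtual address into page number and offset, then look the
    page number up in the per-process page table to get a physical frame."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault: no mapping for page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

# Two processes share the same virtual layout but have distinct page tables,
# so identical virtual addresses resolve to different physical locations.
proc_a = {0: 7, 1: 3}   # virtual page -> physical frame
proc_b = {0: 2, 1: 9}

print(translate(proc_a, 10))   # frame 7: 7*4096 + 10 = 28682
print(translate(proc_b, 10))   # frame 2: 2*4096 + 10 = 8202
```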



I removed:

There is a common misconception that virtual memory is for providing more computer storage to software than actually exists. Though useful, this is not the only use. A computer's physical memory address space is shared by RAM, ROM and input/output. Of these only RAM is available for use by application software. The RAM might be spread across the system's address space, and interspersed with ROM and input/output. This layout varies from computer to computer. Without virtual memory, software would have to be modified to run on each particular computer. Virtual memory hides the physical addresses from software, permitting software vendors to sell precompiled software.
Actually, I think that is not a misconception. Systems have been built with protection and relocation but no ability to provide more apparent memory than actually exists. These were not usually called virtual memory systems.

Because I really hate it when people respond to errors in one paragraph by adding another that contradicts it. If someone can reconcile these, even crappily, please put them back in a sensible form. Tuf-Kat

I agree. One of the greatest advantages of virtual memory is providing more storage than available RAM. As it stands, the article doesn't mention it at all. I'm not enough of an expert to provide a coherent discussion (at least not right now), but not mentioning it at all makes the entry erroneous. —Frecklefoot 18:29 13 Jun 2003 (UTC)


Paging != virtual memory

The article is somewhat poorly organized. It confuses three separate notions:

  • the provision of more CPU-addressable memory than the machine actually has main memory (which is what is properly known as 'virtual memory'), often as part of a multi-level storage architecture
  • paging, which was added to the above mostly as a memory allocation strategy, to obviate the need for copying stuff around, and also to allow optimization of the size of the objects moved between the main memory and disk memory levels of the MLSA
  • protection of supervisor memory from user programs

You can do the first with segmentation hardware alone, and a number of early computers did so. You can do the third without either of the first two, and again some early systems (e.g. the early 360 machines) did so.

I don't have the energy at this time to rewrite this article, and the paging article, to make these distinctions clear, but will do so Real Soon Now. Noel 13:17, 13 Sep 2003 (UTC)

I've updated the introduction, which is at least a start on this; however, more work could perhaps be done on the rest of the article. Likewise, I don't have the energy to do the whole thing, but may have a crack at it when I get some spare time. Guinness 16:52, 26 November 2005 (UTC)

Thermidoreanreaction 13:09, 15 March 2007 (UTC)(TR) Someone please link the computer concepts in the text for readers who are not computer experts. This will help make this article more understandable for the average person. Thanks.

Remark

In the first paragraph, it tells us that computer processes are not limited to the physical memory size, due to VM. This is not correct; you can't have more VM than your total physical memory. VM just lets more media be available as allocatable memory.  Sverdrup (talk) 11:29, 14 Dec 2003 (UTC)

This article is using "physical memory" to mean RAM - i.e. disk is not included in "physical memory". So the statement is accurate.
I think this is just a matter of terminology. I suggest using "Primary Storage" to refer to what people generally call RAM. As a side issue, you could have more memory than the total physical memory simply by incorporating data compression into your virtual memory system. Guinness 23:03, September 3, 2005 (UTC)
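As an illustration of the compression point, here is a toy Python sketch (using zlib, with made-up page contents) showing that a compressed copy of a page can occupy far less than its logical size - the principle behind compressed-memory swap schemes:

```python
import zlib

# A hypothetical "compressed RAM" store: pages are compressed before being
# kept resident, so the logical data held can exceed the raw byte budget.
page = b"A" * 4096                 # a highly compressible 4 KiB page
compressed = zlib.compress(page)

print(len(page), len(compressed))  # the compressed copy is far smaller
assert zlib.decompress(compressed) == page   # and it round-trips losslessly
```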

Rewrite

I went to fix an error in the article (it said all TLB refills, when a translation is not in the TLB, were done under software control, which is not true - most CPUs refill the TLB cache from the page tables in main memory without taking an exception) and I simply couldn't find a simple way to fix the article (see my comments above about how it mixed up paging and virtual memory). Every time I fixed something, it made some larger part of the article not work. So I finally wound up rewriting much of the article.

It now treats virtual memory separately from paging. (A number of systems did VM without paging, most notably the PDP-11, but also machines like the GE-645, which supported both paged and unpaged segments.) It still needs more work, but IMNSHO we're closer to where we ought to be than we were before. Noel (talk) 04:41, 1 Dec 2004 (UTC)
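A toy Python model of the behaviour described above - a TLB acting as a small cache of translations, refilled from the page table on a miss without any exception being taken; only a missing page-table entry faults. All sizes and numbers here are invented for illustration:

```python
PAGE_SIZE = 4096

class MMU:
    """Toy MMU: the TLB is a small cache of recent translations; on a miss
    the 'hardware' walks the page table itself. Only a missing page-table
    entry raises a (page) fault."""
    def __init__(self, page_table, tlb_size=4):
        self.page_table = page_table
        self.tlb = {}
        self.tlb_size = tlb_size

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.tlb:                 # TLB miss: hardware walk
            if vpn not in self.page_table:
                raise LookupError("page fault on page %d" % vpn)
            if len(self.tlb) >= self.tlb_size:  # crude eviction policy
                self.tlb.pop(next(iter(self.tlb)))
            self.tlb[vpn] = self.page_table[vpn]
        return self.tlb[vpn] * PAGE_SIZE + offset

mmu = MMU({0: 5, 1: 8})
print(mmu.translate(4100))   # page 1 -> frame 8: 8*4096 + 4 = 32772
print(mmu.tlb)               # the translation is now cached: {1: 8}
```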

What is virtual memory anyway?

I removed the text:

A computer usually runs multiple processes during its operation. Each of these processes has its own address space, the area in which it stores information. It would be very expensive to give each process the entire memory space available because most processes use only a small portion of the main memory at any given time. Virtual memory divides physical memory into sections and allocates them to different processes.

because it's flat-out incorrect. What it describes (the division of actual physical memory between various processes) is not virtual memory, but rather plain old multiprocessing. Many early OSes (e.g. OS/360, in its early MFT and MVT versions) provided exactly what is described in that paragraph, but were most definitely not virtual memory systems.

That division of memory can be provided by "base and bounds registers" (such as provided on early PDP-10s and System/360 machines), without in any way providing virtual memory.

Both segmentation and paging have been used to provide virtual memory (the latter, as described in the article), but both are more allocation and/or implementation techniques which can be used to provide virtual memory than virtual memory in and of themselves. (I.e. you can have paging as a main memory allocation technique without having virtual memory.) Noel (talk) 19:40, 6 Dec 2004 (UTC)
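A toy sketch of the base-and-bounds scheme mentioned above (Python standing in for hardware; the register values are invented): it provides relocation and protection, but no way to present more memory than physically exists.

```python
def access(base, bounds, vaddr):
    """Relocate a process-relative address via a base register and reject
    anything outside the bounds register -- relocation and protection,
    but no demand paging and no 'more memory than exists'."""
    if vaddr >= bounds:
        raise MemoryError("bounds violation at address %#x" % vaddr)
    return base + vaddr

# Two processes, each confined to its own region of real memory, both
# see an address space starting at 0:
print(hex(access(0x4000, 0x1000, 0x10)))   # process A: 0x4010
print(hex(access(0x9000, 0x0800, 0x10)))   # process B: 0x9010
```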

My professor made it quite clear in class that the main reason we use virtual memory is not to make main memory appear larger - this was what I suggested when he asked the question, so I made a strong mental note. VM was invented to alleviate the burden of managing the two levels of memory represented by main memory and secondary storage. Before VM, programmers were responsible for moving overlays back and forth from secondary storage to main memory. However, in modern-day computers, because of modern memory sizes, a programmer would almost always be able to load his entire program into main memory without having to deal with these overlays. Rather than simply size and space issues, it has much more to do with relocation of data (allowing the same program to run in any location in physical memory), and protection of that data (preventing one process from modifying another's code/data). It is true that one of the features of virtual memory is that it gives the appearance of larger main memory, but this is not often used because programs very rarely exceed modern memory capacities.
This is sort of a "six/half-dozen" argument, because "make main memory appear larger" and "alleviate the burden of managing two levels of storage" are two sides of one coin - they are really just different ways of saying the same thing. You can't get the programmer out of the business of explicitly managing multiple levels of storage unless it looks to them like they have a large enough main memory that they don't need secondary storage. That's why they called it virtual memory, right? Why do you think they picked that name?
As to the relocation issues, again, you don't have to have virtual memory to do that, and many 1960's computers did so - e.g. the KA-10 model of the PDP-10, which gave each process a private address space starting at location 0, but had no support for virtual memory. Ditto for protection.
As to the size of modern memories, it's true that they are now so large you don't need virtual memory as much - perhaps if we'd had memories that large in the 1960's, we'd never have bothered with virtual memory, and stuck with the simpler memory management mechanisms that just provided relocation and protection. (Although we'd have probably wanted paging too, to make memory allocation easier.) Noel (talk) 05:02, 8 Dec 2004 (UTC)
PS: The difference between a paging/relocation/protection system with and without virtual memory, of course, would be that in the without case, you wouldn't need the ability to take a page fault - i.e. have an instruction's execution stop because a memory reference wasn't able to complete, and be able to restart (or continue) that instruction later, after the memory was available. With enough memory, all of a process' memory would always be in main memory when it was running, and you'd never need to be able to take a page fault.
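The page-fault-and-restart behaviour described above can be sketched as a toy Python demand pager (the page size and contents are invented; on real hardware the fault is taken by the CPU mid-instruction, not via an exception handler like this):

```python
PAGE_SIZE = 4

class DemandPager:
    """Toy demand pager: pages live on 'disk' until first touched, at which
    point a fault brings them into 'main memory' and the access is retried."""
    def __init__(self, backing_store):
        self.disk = backing_store      # all pages (the full address space)
        self.ram = {}                  # resident pages only

    def read(self, addr):
        vpn, offset = divmod(addr, PAGE_SIZE)
        try:
            return self.ram[vpn][offset]
        except KeyError:
            # Page fault: load the page, then restart the faulting access.
            self.ram[vpn] = self.disk[vpn]
            return self.read(addr)

pager = DemandPager([b"abcd", b"efgh", b"ijkl"])
print(pager.read(5))          # faults in page 1, returns ord('f') == 102
print(sorted(pager.ram))      # only the touched page is resident: [1]
```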
The difference between paging and segmentation with respect to virtual memory is simply how the code/data gets divided. Paging implies a constant size overlay while segmentation allows for varying sized overlays.
Yes and no. You're right that paging uses fixed size units, but there may be a lot more to segmentation than the size. (I say "may be" because not all systems with segmentation have more to them.)
However, in some (e.g. Multics, as well as a number of later systems which copied it, such as the IBM System/38, the Prime machines, etc), the segmentation was actually visible to the user processes, as part of the semantics of the memory model provided to processes. In other words, instead of a process just having a memory which looked like a single large vector of bytes (or words or whatever), it had more structure. This is different from paging, which doesn't change the model visible to the process. This can have important consequences.
And no, it wasn't a kludge (as in the 80286, say) - in Multics, at least, the segmentation was a very powerful mechanism that was used to provide a single-level model, in which there was no differentiation between "process memory" and "file system memory" - a process' active address space consisted only of a list of segments (files) which were mapped into its potential address space (both code and data). And no, it's not the same as the mmap() model in later versions of Unix, etc., because inter-"file" pointers (both code and data) don't work if people are mapping files into semi-arbitrary places, at least not without a lot of extra instructions as overhead. Multics could do relocated inter-segment references as an addressing mode on most instructions. (See the "Multics" book by Organick if you want to know more about how this worked - different processes could map the same segment into different places in their address spaces and it all still worked.) Noel (talk) 05:02, 8 Dec 2004 (UTC)
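A small Python illustration of the pointer problem described above (the segment names and base addresses are invented): an absolute pointer saved by one process is wrong in another process that mapped the same files elsewhere, while a Multics-style (segment, offset) pair relocates correctly.

```python
# Simulated address spaces: each process maps the same two "segments"
# (files) at different base addresses.
proc1 = {"code": 0x10000, "data": 0x40000}
proc2 = {"code": 0x70000, "data": 0x20000}

# An absolute pointer into "data" stored by proc1 is meaningless in proc2:
abs_ptr = proc1["data"] + 0x100
assert abs_ptr != proc2["data"] + 0x100

# A (segment, offset) pair -- roughly the Multics approach -- resolves
# correctly no matter where each process mapped the segment:
seg_ptr = ("data", 0x100)

def resolve(proc, ptr):
    segment, offset = ptr
    return proc[segment] + offset

print(hex(resolve(proc1, seg_ptr)))   # 0x40100 in proc1's address space
print(hex(resolve(proc2, seg_ptr)))   # 0x20100 in proc2's address space
```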
I am getting this from my textbook and not making it up off the top of my head, nor am I copying text directly from the book. --Underdog 19:00, Dec 7, 2004 (UTC)
Understood. I hope my comments above were useful. Noel (talk) 05:02, 8 Dec 2004 (UTC)

Why virtual memory

I removed:

Today, however, this is not the primary reason virtual memory is used.

because i) it didn't say what the primary reason is now, and ii) I was unable to add said reason (AFAIK the reason to use virtual memory remains what it always was - to avoid burdening the programmer with the details of multi-level storage management, and to simplify programs). If someone would care to fill in what the reason is, we can add this back. Noel (talk) 20:11, 6 Dec 2004 (UTC)

As a result of the discussion above, I think I understand what you meant here (that the ratio of "desired real memory" to "actual real memory" is much closer to one now, so that making the computer's main memory look much larger than it really is, is a lot less important now). Is that right? I will modify the article to say this; let me ponder how best to say it. Noel (talk) 05:12, 8 Dec 2004 (UTC)

History

I'm surprised not to see a history section so I added one, cobbled together from http://www.cne.gmu.edu/itcore/virtualmemory/vmhistory.html and http://www.economicexpert.com/a/Memory:page.htm - but I am not convinced it's definitive. Further updates welcome. (To the section on early personal computers, it's tempting to add the Bill Gates quote "640KB ought to be enough for anyone", but I believe it's apocryphal). joe 3 July 2005 21:54 (UTC)


Modified the last sentence to mention Apple's System 7, which preceded Windows 3.1 by almost a year in terms of virtual memory. Hopefully, someone may write an OS X "example" to balance what has been written for Windows and Linux on this page already.


I've been told that the Burroughs B5000 had virtual memory using segmentation when it was released in 1961. It certainly had it early on, but I'm not certain it was in the earliest systems. I'll try to find something definitive and modify the article when I get it. --JeffW 22:55, 14 February 2006 (UTC)

According to the history page at the Unisys web site (www.unisys.com) the B5000 was the first dual processor and virtual memory computer. --JeffW 23:02, 14 February 2006 (UTC)

Swapping to RAM disks?

I wonder whether the following statement from the article is correct: Systems with a large amount of RAM can create a virtual hard disk within the RAM itself. This does block some of the RAM from being available for other system tasks but it does considerably speed up access to the swap file itself.

I have never heard of putting the swap file on a RAM disk, and I don't think it makes sense. Wouldn't it be better if the memory used for the RAM disk were available as "plain" memory? What is the benefit of swapping from RAM to RAM? Of course a RAM disk may speed up programs which use temporary files a lot, instead of (virtual) memory. But that has nothing to do with paging and the swap file.

I've tried to wrap my brain around this statement once more. Maybe the author meant that relocating other frequently accessed files to a RAM disk can speed up access to a swap file which remains on a hard drive. But even if that is the case, it does not do a very good job of explaining this. IMHO this part hurts the understanding of virtual memory more than it helps.

See http://kerneltrap.org/node/3660?from=100&comments_per_page=50 - though for me it's still complete nonsense - I just can't understand why I should use part of my RAM for a swap partition when I could use it normally, in the usual way. --Anthony Ivanoff 09:21, 10 August 2005 (UTC)

Just my one cent here... Maybe what is meant by this article is this: In the 32-bit world, processes can only run in the lower 4GB portion of physical memory, but the processor and motherboard are able to address much more (e.g. 16GB). Although it is impossible to convince the kernel to allocate process memory in the physical space above 4GB, it is still possible to use it as a RAM disk where to put the swapfile. I agree it makes more sense to modify the kernel, but what if you can't do that (i.e. you're using Windows)? --Stephan Leclercq 06:51, 11 August 2005 (UTC)

Bit of speculation: Apart from the issue with the 4GB limit described above, in most cases running a RAM disk to swap to takes away space from primary store and incurs extra overhead to manage the RAM disk. It's a net loss. However, that assumes we're talking about RAM internal to a single host. In a networked storage environment with fast connections, the economics could be different. Up till now the interfaces would be a bottleneck, but with dual 4Gb/sec full-duplex Fibre Channel or InfiniBand, data can be pulled off a SAN very quickly. A hardware RAM disk provides its own management resources, so the host doesn't get stuck with the extra work. If several hosts page to dynamically sized paging files on a shared RAM disk, the paging space allocation among the hosts could vary as needed, and the usual drawbacks of paging to a file probably would not apply. This is pricey gear. Whether such a configuration would be worthwhile would depend on what actual hardware costs turn out to be and whether the virtual address space requirements on the hosts vary enough to merit the investment. Could be a neat solution, but only if you have the right problem. -ef

Why is it considered wise to have double the amount of actual RAM for a swapfile/swap partition?

I keep hearing that it's best to have a swap partition (or pre-allocated swap file, whatever you choose) be twice the size of the amount of actual RAM in your computer. Why twice that amount? Especially if you have 1.5-2GB of RAM and are not the type of person who will ever have several dozen memory-hog applications running at once. --I am not good at running 22:49, 17 September 2005 (UTC)

It's just a rule of thumb. If you have 1.5-2GB of RAM you probably have 150-200GB of hard drive. Sparing 1% of this for swap isn't too much, is it? It effectively doubles your memory. Most people would never need 4GB of memory (2 phys and 2 swap)...but the users that do need 2GB of phys likely want some breathing room beyond that 2GB. Justforasecond 06:07, 5 March 2006 (UTC)
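The rule of thumb above is just arithmetic; a throwaway Python sketch (the factor of 2 is the heuristic under discussion, not a law - machines with lots of RAM often use far less):

```python
def suggested_swap_gib(ram_gib, factor=2):
    """The traditional 'twice RAM' rule of thumb for sizing swap."""
    return ram_gib * factor

ram = 2                      # GiB of physical memory
swap = suggested_swap_gib(ram)
print(swap)                  # 4 GiB of swap
print(100 * swap / 200)      # ...about 2% of a 200 GiB disk
```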

Diagram terminology

The terms on the diagram should probably be updated:-

Virtual Memory -> Virtual Address Space
Physical Memory -> Primary Storage
Hard Disk -> Secondary Storage

In fact, in theory, tertiary storage could be used in place of the hard disk. Although obviously this would be A Bad Idea, it may be worth clarifying the distinction between primary storage and anything other than primary storage. Guinness 17:02, 26 November 2005 (UTC)

Paging file fragmentation discussion needs review

I'll yield to those with more expertise on the theoretical issues. Personally, the point I find most important, i.e., that virtual memory refers to a logical address space that is independent of the physical memory architecture, was made well.

Incidentally, thanks for this comment. It's nice to know one's efforts are appreciated :) Guinness 00:58, 25 February 2006 (UTC)

I have questions about the later sections about "myths" about the windows paging file and the subsequent section on virtual memory in linux.

I have personally encountered strange "apparent memory problems" that were only remedied by setting a static paging file size and defragging the disk. I found the solution in an O'Reilly book, and while I realize that O'Reilly is not infallible, I suspect they vet their material reasonably well. Since then I have had friends who've had similar problems, and before recommending the same fix I've surfed the web for updates and found numerous reports of users having similar problems and fixing them the same way. And when I suggested trying the same fix, it worked. I also seem to recall (though I could be mistaken) that Microsoft actually announced a fix for this problem at one point, but apparently it didn't work. (Please correct this if it's wrong.) It could be that fragmentation per se is not the cause of the symptoms, but empirically the symptoms and the cure seem consistent with that explanation. I suppose this could be coincidental, but if so I'd love an explanation. Incidentally, this is the first suggestion I've seen anywhere, after a lot of looking, that this problem is not real.

I find the rationale for this not being a problem to be weak. To begin with, the author overlooks the fact that constantly resizing the file incurs overhead. While I appreciate that in a multitasking environment the disk will tend to seek around a lot and a bit of fragmentation won't make much practical difference, if the paging file gets fragmented badly enough, eventually it could. Unfortunately, I don't have great expertise on NTFS internals, so I can't comment on how much it is affected by fragmentation. If the paging file is on a FAT file system, fragmentation problems seem likely. In a desktop computing environment where the user can boot up and run a calculator desk accessory, then launch an office suite and a graphics program all at once, then quit everything and just read email for a while, demand for memory can change significantly and often. If the paging file is being radically resized often and the disk is crowded, problem fragmentation seems likely.

User programs can vary in the way they access memory. Typically, programs access only a small section of their allocated memory at a time, and the boundaries of that area tend to change gradually. However, there are exceptions, and those exceptions tend to be less graceful about paging. So sizing a static paging file requires some knowledge of the programs that will be running. An OS vendor cannot know that in advance. I suspect Windows defaults to a dynamic paging file so that any application the user runs will run reasonably well right out of the box. Users who understand the requirements of their software are in a position to size static paging files.

The next section on Linux virtual memory raises more questions. This section says that Linux is usually configured to page to a raw partition to avoid file fragmentation problems. If fragmented paging files are not a problem for Windows, why would they be a problem for Linux? Generally, UNIX-style file systems do not rely on contiguous block allocation, so one would expect this to be less of a problem for Linux than for Windows. I suspect the key advantages of paging to raw disk are that the overhead required to manage the filesystem layer is eliminated and that the space cannot be encroached upon by other files.

I don't know if I'm just not getting it, or if there are some errors, or if more explanation is called for. But as it stands, what's there seems to be either inconsistent, unclear, or both. -ef

This whole argument about page file fragmentation being a performance hit falls apart when you consider that the Windows page file very rarely changes size, because you have to fill all available physical memory *and* all available pagefile space (which in a default Windows XP configuration is another 150% of your total physical memory). You get a balloon message when the page file is being resized, and at no other time. If you aren't seeing that balloon, your page file isn't resizing. If you *are* seeing that balloon, your real problem is that you don't have enough physical memory to do what you want to do with the machine -- in which case, a fragmented page file is the least of your performance problems. Warrens 06:56, 5 March 2006 (UTC)
So sizing a static paging file requires some knowledge of the programs that will be running.
For 99.99% of home and corporate users simple rules of thumb about static pagesize work fine. Justforasecond 07:03, 5 March 2006 (UTC)
That's a dangerous assumption to make, and, quite frankly, wrong. Think this through a little more. A static page file and a resizable page file work exactly the same almost all the time... but in those rare cases where additional memory is needed, being able to expand is very useful.... perhaps it's a Windows XP machine with 128MB memory and the user is forgetful about closing applications? Perhaps it's a long-running game of Civilization 4 or some other game that eats memory like it's going out of style? Understand that many people who use Windows don't even know what memory is, much less understand virtual memory or reasonable limitations. A resized page file is not a *significant* performance hit, and is certainly preferred to having your application, or Windows itself, crash because the OS can't complete a memory allocation request successfully. Having a resizable page file doesn't hurt this mythical 99.99% group of people you have spoken for, and in feasible (but rare) cases, could save them data loss.
If you're still not convinced, try this yourself:
  • From a freshly-booted system, measure out how long it takes to do a few operations that make use of the HDD. Use the performance monitor to see how much I/O activity is taking place, how much of the page file is being used, etc.
  • Create a page file that's very small but has a lot of expansion space. Reboot.
  • Use lots and lots of memory. Load every game, application, tool, media player, and document you've got handy. Again, use performance monitor to watch I/O and page file activity.
  • Watch for the balloon message indicating that virtual memory has expanded. Keep piling it on.
  • Watch the page file expand as you continue to use more memory.
  • You should now have, in theory, a fragmented page file, right? Reboot.
  • From your freshly-booted system, measure out how long it takes to do the same few operations as in the first step. Again, use the performance monitor, look at I/O activity, page file usage, etc.
What you're going to find is that there is no difference that can be attributed to anything more than margin of error. Remember, fragmentation is only an issue when you read a file sequentially, and as such, the drive head needs to move a greater distance to find blocks of data; that's not how a page file is used during regular system operation, especially not in large quantities when you're not using all available physical memory. Warrens 07:32, 5 March 2006 (UTC)
I may not have been clear -- an adjustable pagefile size *will* help out in rare cases, but if you are going with statically sized, you really don't need to have much knowledge of a particular users programs. 512MB of RAM? Make a 1-2GB page file. Simple enough, right?
I do think the adjustable size helps in *rare* cases, but consider the cases you mentioned. Users neglecting to close apps? Even if you had 10 open apps, it's unlikely they'll each require 150MB of memory. Civ 4? I'm not familiar with its memory model, but most memory-hogging software that pretends to be reliable will attempt to use a combination of disk and memory on its own -- NOT solely the built-in VM system. Justforasecond 16:46, 5 March 2006 (UTC)
I want to second the vote for re-working or even removal of the "myth" section. The windows/linux paragraphs clearly contradict each other, and no matter what the "truth" is, it hardly seems like encyclopedia-level discussion.
As for the points made so far, none of what has been said about Windows paging in the article or discussion makes any sense. Swapfile fragmentation most certainly DOES matter. For one thing, Windows pages out unused data in the background to free up RAM to use for buffering and to make room for allocations which have not occurred yet. Discontiguous swap space will cause the drive head to move farther and keep related I/O systems busy. Furthermore, when pages are swapped out to make room for swapping in data from an idle app (alt-tab), there are two paging operations going on, and the time required will be directly proportional to the distance the drive head has to travel to complete all its work. These paging operations involve relatively small amounts of data, so drive head latency dominates the equation. Also, individual processes are using 50-150MB each on XP these days, so swapping is an issue even on machines with 1GB of physical RAM.
I'll follow up with edits or suggestions when I'm logged in and have a real keyboard. I'm on my Zaurus at the moment. :) -- Crag 66.213.200.181 22:47, 3 June 2006 (UTC)

I also agree that the misconceptions section needs some work. It appears to be a debate, and there is not enough evidence for the explanations given. Since Wikipedia isn't a debate ground, I think it should be clear that the views expressed are held by *some* people. It has been my experience that Windows automatically resizes the page file to be larger regardless of the maximum size. I also agree that if Windows is using 2-3x physical memory, the biggest issue is not page file fragmentation; however, that argument is irrelevant to the page file discussion. A defragged page file IS faster than a non-defragged page file, and regardless of the performance increase, this should be noted for accuracy.

I agree this isn't the place for debate. But there are a lot of myths out there. I used to set my page file as static. But I realised there is no need. What you should do is set the minimum size to the maximum you're ever likely to need, perhaps the same as you would set a static file. Then set the maximum to something larger than that. As someone else pointed out, Windows tells you when it's increasing the size. With my config, this very rarely happens, but when it does, it's probably good that it does. Fragmentation doesn't matter much, since this should never happen except in emergencies. When I restart, the page file will go back to the minimum size and will not be fragmented (well, unless it already was). All that really needs to happen is that you set the page file to a level at which it rarely increases. A static size isn't necessary. Nil Einne 16:34, 9 January 2007 (UTC)

macintosh system 7 and win 3.1 virtual memory

does anyone know the details of the mac system 7 or windows 3.1 vm systems?

The mac VM system seemed to be pretty immature when I used it. I don't think there was any address-space protection. it had a bit of paging (you could extend your 8MB of RAM to 10MB and make your progs run dog slow) and must have had some OS hooks to manage this, but it was easily crashable, which is one indicator of lack of a fully-implemented vm system.

win 3.1 seemed to be mostly a glorified UI on top of DOS. did it have any vm system at all? maybe a swap file?

Justforasecond 06:12, 5 March 2006 (UTC)

is the vm debate really settled?

many computers (though not the PCs and Macs we're sitting at) do not use virtual memory systems. it slows things down, makes performance unpredictable, consumes memory, and adds additional points of failure. your anti-lock braking computer, for instance, probably does not implement a virtual memory setup.

future compilers and OSs could be sophisticated enough to obviate some of the need for virtual memory systems. address-space protection becomes unimportant if you can trust that code won't try to chase memory that it doesn't own.

Justforasecond 06:20, 5 March 2006 (UTC)

I can't imagine the memory it consumes is more than a few percent of total memory, and for a general-purpose desktop or server the advantages are significant. Apps don't need to worry about getting stuff out of memory the instant they have finished with it. You're right about single-purpose realtime embedded systems, though. VM would be more trouble than it's worth there. Plugwash 16:39, 19 May 2006 (UTC)

Hi All

I don't like posting this here, but it seems that this is the only place where people know! I'm using 4GB of RAM on 32-bit Windows XP. I use the /3GB switch, which according to Microsoft can allocate 3GB of RAM for user apps. I use a fixed page file on a separate partition with a 5GB size. I'm talking about a rendering process with very big resolution. Even with those settings my PC is running out of virtual memory. The next thing I'll do is make the page file partition bigger, but I was thinking about getting the best out of it. So I thought about the format of that partition. It will be NTFS for sure, but I was thinking about clusters. What cluster size would be best for the translation? I guess that because of the 32-bit addressing of RAM, the default 4K cluster is best, but I'm not sure at all. Please, someone who knows - give me a hand with this. Many thanks. —This unsigned comment was added by Hepo (talk • contribs) .

First of all, a separate partition for a Windows pagefile is going to be detrimental to performance, and creates an artificial limitation where none is needed. You're forcing the drive heads to move further and do more work.
Second, your best bet is probably to move to 64-bit Windows. Generally speaking, there is a 4GB limit on pagefile size per partition on 32-bit versions of Windows, though you can use a method documented in MSKB237740 to put multiple page files on a single partition. You will need 64-bit Windows if you want to create larger pagefiles. Your rendering application may have a 64-bit version available as well, which will allow the application to use much more than 3GB of memory, physical or otherwise. If upgrading to the appropriate hardware isn't financially feasible, get another *fast* HDD (10k RPM SATA or 15k RPM SCSI) and put an additional pagefile on that drive. Windows will split pagefile activity between drives to derive the best performance, so this is a much better solution than having multiple pagefiles on a single drive.
Third, cluster size is basically meaningless in the context of pagefile access. Windows uses the space allocated for the pagefile in special ways to squeeze out the best performance, and changing the cluster size isn't going to help. Warrens 04:39, 1 April 2006 (UTC)

Thanks Warrens

Again, thanks for the reply. There are a lot of factors I didn't mention about my situation. You are right about the 4GB maximum limitation of 32-bit Windows (I overstated that). I have 64-bit Windows on my new workstations, and the thing is that the 32-bit ones have OEM Windows versions (in other words, there won't be an upgrade for them), and I still want them to serve me as well as the new ones. About the separate partition for swapping: Micro$oft says it is best for performance because of how busy the system drive is, and my swap drive is next to the system one. I store my "ready data" on file servers, so the hard drive serves only Windows. If cluster size is meaningless, I guess there is nothing else that can be done; I have to go for an upgrade. Thanks for the multiple-page-files article, I wasn't aware of that (maybe some day, in desperate need, I'll try it :)). Thank you so much again. There must be more people like you in this world. Best Regards. —This unsigned comment was added by Hepo (talk • contribs) .

Hey Hepo -- you might want to look into your apps and make sure you don't have a memory leak. If the apps keep using more and more memory for no obvious reason, or if they don't seem to use less memory when they aren't being used much, you might have a prob. Justforasecond 01:35, 2 April 2006 (UTC)

I will. In fact, this is a new version with which those problems came up. The thing is that this piece of software swaps everything that can be swapped; it desperately wants a 64-bit OS, I guess. Thanks again. I'm thinking now about a system-managed page file; it seems that fixed may be a problem as well. Wish you Greats.

Contradictory lead and sections (Split Suggested)

The term virtual memory is often confused with memory swapping, probably due in part to the Microsoft Windows family of operating systems referring to the enabling/disabling of memory swapping as "virtual memory"[citation needed]. From Windows 95 onwards, all Windows versions use only paging files. In fact, Windows uses paged memory and virtual memory addressing even if the so-called "virtual memory" is disabled.

I agree that "virtual memory" != "swapping". Yet, later, we have specific implementations of "memory swapping" in popular operating systems. I cleaned them up a bit, but these sections strike me as completely useless (I removed a whole hell of a lot of "to change your swapfile, do this" already).

If nobody objects soon, I'm going to just flat out remove this and remove specific references throughout the entire article to swapping functionality. Thanks, Windows. --JStalk 20:17, 25 August 2006 (UTC)

I heartily agree with you, references to memory swapping should be removed from this article entirely and moved into a separate article ("memory swapping" is currently just a redirect to virtual memory, which is just plain wrong). I made a brief attempt to clarify it a while back when I re-wrote the introduction, but it needs some extensive work to separate into two articles, and thus far I have been too lazy to do this myself. Guinness 16:09, 28 August 2006 (UTC)
Jed, I'm going to revert your entire contribution, as you introduced some brutally bad factual errors, while also removing factually correct and relevant information. Swapping is a -completely- incorrect term to use w/r/t Windows NT in any form. If you don't know that, frankly, you shouldn't be writing about virtual memory on Microsoft Windows. -/- Warren 16:31, 28 August 2006 (UTC)
Christ, pal, easy. I attempted to simply distill the information that was already there into a more acceptable format. I won't claim to be an expert on the matter, but what makes you say swapping is an incorrect term? As I was taught way back when in Nerdery 101, swapping was the process by which pages that were no longer used were flushed to disk and the physical memory freed. Am I wrong?
I strived to work with the "facts" (or not) that were already on the page, not add any information. About the only information I see on your revert that I added is the bit about moving or deleting the swapfile. I stand by my edit. Perhaps the only line I may disagree with in hindsight is:
The Windows platform implements virtual memory as a hidden "swap file".
"Virtual memory" there was a bad choice of words, I agree. How about we go through on a case-by-case basis and you tell me what factual errors I introduced from content already on the page before slashing at me with a reproachful attitude. It's detestable, how you come off -- please don't bite the newcomers, indeed.
If I introduced factual errors, I apologize, that was not my intention. My intention was to remove the unwelcome content on the page. --JStalk 02:17, 29 August 2006 (UTC)
Swapping, as applied to Windows (I consider Peter Norton an authority on anything computing -- his long and diverse contributions to programming as a whole evidence this.) So how is swapping an incorrect term to use with reference to Windows? --JStalk 02:25, 29 August 2006 (UTC)
I too am against the line The Windows platform implements virtual memory as a hidden "swap file". It can be rephrased as To provide the larger address space, Windows uses a hidden "swap file", or Windows uses a hidden "swap file" to act as an extension to the physical RAM, with some detailing on what and how the extension works. --soumসৌমোyasch 07:28, 30 August 2006 (UTC)
That Peter Norton article is from 2000. He was almost assuredly writing about Windows 9x at that point, because very few people outside of businesses were using NT 4 or 2000 back then. Yes, the term "swap file" is appropriate for Windows 9x, but it is not for NT-based operating systems. You can try doing a Google search on "page file site:microsoft.com" and compare it with "swap file site:microsoft.com" to see a pretty clear delineation between which OSs the terms are used with. With that said, back in the 1990s it was common for people to call NT4's paging file a "swap file", even amongst Microsoft employees, because Windows 3.x and 9x were far more popular at the time, and the term "swap file" had a lot more traction.
Anyways, if you're looking for an authoritative source on accurate, technical information about Windows NT and its descendants, Norton isn't your man. These days he's a book author first, and a technologist second. Instead, pick up the book "Microsoft Windows Internals" by Mark Russinovich and David Solomon; it's a fantastic, well-written book and it digs deeper into the real guts of Windows better than anything else out there. Russinovich is well known for his Sysinternals line of tools, which you may have heard of, and he was recently hired by Microsoft to work on the Windows kernel... so yeah, he knows his stuff. Of interest to this discussion is Chapter 7 which covers memory management in eye-watering detail. It's the single largest chapter of the book at 110 pages! While this article isn't really the place to go into similar levels of detail, it's quite clear that "swap" is not part of the modern nomenclature, and our summation of Windows' virtual memory system needs to reflect that accurately. -/- Warren 11:45, 30 August 2006 (UTC)
You chose to completely ignore your allegation that I introduced factual errors, instead slamming Peter Norton. I am beginning to notice that you are acting uncivil and in bad faith.
My response to you is not appropriate for this talk page any longer, and I will post the completed response on your talk page. --JStalk 22:53, 30 August 2006 (UTC)
It's not worth your time to get offended that I'm pointing out a source of information on the subject that is far more qualified on the subject than Peter Norton is. Don't take it personally, it's the truth... accept it and move on. Now do you really need me to describe your contribution in depth to point out the glaring factual errors? Okay, let's do that, but you really aren't going to like it:
Sentence 1:The Windows platform implements virtual memory as a hidden "swap file" on disk.
No it doesn't. Virtual memory is implemented as described in the rest of the article; the CPU and OS share responsibilities for presenting a contiguous address space to applications. That address space can be backed by a page file or a swap file, but that is only a part of the bigger picture.
Sentence 2:Through the versions of Windows, this file has moved and been renamed several times.
Twice -- and it depends on how you want to count it. It is called 386SPART.PAR in Windows 3.x and WIN386.SWP in Windows 9x. The paging file in NT has always been called PAGEFILE.SYS; it's never been renamed in that line of operating systems. The text you deleted made this point fairly clearly.
Sentence 3:Moving it or deleting it while the system is running (or sometimes even outside of the system) is often a cause for drastic error.
It is actually impossible to remove or change the location of the page file while Windows is using it. If the file is deleted while the OS isn't running, a new file will be generated next time the OS boots; the only circumstance in which the OS will fail at this point is if it is unable to create that new paging file (full HDD, e.g.).
Sentence 4:(Windows XP, however, will regenerate the swap file at boot should it be deleted while Windows is not running.)
Correct, but previous versions of Windows do this too... and it's still not called a swap file.
Sentence 5:In Windows XP, virtual memory was improved by allowing page files to reside on multiple drives.
Ignoring this inaccuracy that virtual memory is the page file, this is not a feature new to Windows XP. Multiple page files were possible in NT 4.0; possibly 3.51 too, but my reference manuals on that version are packed in a box right now so I can't check easily.
Okay? Are we clear on all that? If you still want to go on finger-pointing and claiming "incivility" and "bad faith" instead of simply accepting that the article had it right and you had it wrong, that's your choice, but it's not a good use of your time. Instead, go track down that book I mentioned earlier and get reading. I wouldn't be taking the time to explain this if I wasn't absolutely certain that the weight of evidence out there didn't support it. -/- Warren 23:43, 30 August 2006 (UTC)
Since you decided to leave it here, I'll respond here. I am offended with you because you made a personal attack at me about my level of knowledge and familiarity with a specific piece of subject matter, due to the wording I used. Your defense of that is also completely incorrect, as I will prove in the following essay.
Let us, for the purposes of this essay, say that you own a Ford car. One day, you decide you would like a Toyota instead, because you have heard great things about Toyota automobiles. So, you take out your tools, go outside, and remove all Ford logos from your car. You then spraypaint "Toyota" all over the car and add Toyota logos to replace the Ford ones. Being proud of your accomplishment, you begin to tell your friends that your car is a Toyota.
Needless to say, your friends are going to look at you like you are an idiot.
Your automobile is still a Ford. It drives like a Ford, it looks like a Ford, and it is most certainly registered with your motor vehicle bureau as a Ford, regardless of what you may think it is.
This same situation is playing out with the Windows swap file. With the release of one of the Windows versions, the developers renamed the swap file from win386.swp to pagefile.sys. This has caused many Microsoft people, notably a MS-MVP named Alex Nichol[1] and scores of people that read Microsoft documentation, to call the swapping file the "page file". You can call that Ford a Toyota, that does not imply that it is no longer a Ford.
Now, that is all well and good. I would not care so much what the file is called if it did not bleed into technical discussions about the underlying mechanism.
For some reason, you have thrown quite a big accusation at me, that of "introducing factual error" when you reverted my edit to Virtual memory. In defense of your accusation and revert, you stated that:

Swapping is a -completely- incorrect term to use w/r/t Windows NT in any form. If you don't know that, frankly, you shouldn't be writing about virtual memory on Microsoft Windows.

Let us begin unraveling your claim with an introduction to both paging and swapping. I am tired of people that have not written a line of working software telling me what I know. I just cannot keep civil with you, and I apologize for that.
Paging
Paging is a feature of modern processors, the IA-32 series included, that allows the linear address space of the processor to be mapped to a series of 'pages' (most commonly 4,096 bytes in size, but there are a few options) at the operating system's discretion. Through a complex translation, virtual addresses referring to these pages are translated to physical addresses in memory using a variety of mechanisms before being sent out on the address line. This is the basis of "Virtual Memory", also known as "Paging".
In short, addresses specified by applications (called logical addresses) are translated in hardware to an actual, physical address (called absolute addresses). This allows pages containing code and data to be moved around with no impact on applications; to the application program, paging is completely transparent. On the operating system side, paging is implemented via a series of page tables in memory that the OS sets up. Access restrictions can also be implemented in hardware at OS discretion. From the Intel Architecture Software Developer's Manual, chapter 3, section 6:

When paging is used, the processor divides the linear address space into fixed-size pages (of 4 KBytes, 2 MBytes, or 4 MBytes in length) that can be mapped into physical memory and/or disk storage. When a program (or task) references a logical address in memory, the processor translates the address into a linear address and then uses its paging mechanism to translate the linear address into a corresponding physical address.

If the page containing the linear address is not currently in physical memory, the processor generates a page-fault exception (#PF). The exception handler for the page-fault exception typically directs the operating system or executive to load the page from disk storage into physical memory (perhaps writing a different page from physical memory out to disk in the process). When the page has been loaded in physical memory, a return from the exception handler causes the instruction that generated the exception to be restarted.

But wait a minute! There's disk storage in there! That means it is time for our next section...
Swapping
Swapping is the process used by some operating systems to flush unused pages to disk to make room for others. After the page is saved to disk (called "swapping out"), the page is marked as "not present" in the operating system's or program's page table (described above). To the processor, this is not a concern, because it is not using the data in that page at all.
When that page is eventually used, however, here is the short progression of steps that happens:
1. The instruction in question references the virtual address of a page that has been swapped out.
2. The processor freezes the instruction and begins translating the virtual address.
3. The processor determines that the virtual address maps to a page (via the page table) that has been flushed to disk and is not present.
4. The processor generates a #PF (page fault), which is a cue for the operating system to "swap in" the page (load it from disk).
5. The operating system's #PF handler loads the page from disk into physical memory -- possibly in a different location -- and updates the page table. If the #PF handler was invoked with an invalid address, this is where operating systems generate an "Invalid Page Fault" error (Windows' STOP).
6. The #PF handler returns, indicating to the processor that everything is fixed, and the processor re-evaluates the task's current instruction that was frozen.
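That six-step progression can be modelled as a toy simulation (hypothetical Python; the class, its field names and the single-level table are illustrative only, not any real OS's handler):

```python
PAGE_SIZE = 4096

class ToyVM:
    """Toy model of hardware translation plus an OS page-fault handler."""

    def __init__(self):
        self.page_table = {}  # virtual page number -> (frame, present flag)
        self.swap = {}        # virtual page number -> page contents "on disk"
        self.ram = {}         # frame number -> page contents
        self.next_frame = 0

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)   # steps 1-2: decode the address
        entry = self.page_table.get(vpn)
        if entry is None or not entry[1]:        # step 3: page is not present
            self.page_fault(vpn)                 # step 4: raise the #PF
            entry = self.page_table[vpn]
        frame, _ = entry
        return frame * PAGE_SIZE + offset        # step 6: retried access succeeds

    def page_fault(self, vpn):
        # Step 5: the OS handler swaps the page in from "disk".
        if vpn not in self.swap:
            raise MemoryError("invalid page fault")  # the invalid-address case
        frame, self.next_frame = self.next_frame, self.next_frame + 1
        self.ram[frame] = self.swap.pop(vpn)
        self.page_table[vpn] = (frame, True)
```

The first touch of a swapped-out page triggers the fault path; a second access to the same page translates without faulting, which is exactly the transparency the text describes.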
This process is completely transparent to application developers. It is called swapping, always has been (long before Windows existed), and always will be.
The Nomenclature
You said swapping is an incorrect term to use for Windows NT, because, quote:

[...]it's quite clear that "swap" is not part of the modern nomenclature, and our summation of Windows' virtual memory system needs to reflect that accurately.

"Page file" is a Microsoftism that they seem to have adopted. Swapping is implementable in operating systems without using the paging mechanism of the processor (it just requires more work). Tying swapping to paging is a mistake on Microsoft's part, as they are two independent processes. It is a Microsoftism.
The Microsoft Windows "page file" is a swap file. And you can give me the riff-a-roo about introducing "factual errors" because I prefer to stay with the computer science term, and I'll respond just as I am now.
How can a term describing a process (potentially) completely independent of paging not be part of the modern nomenclature? Microsoft calls shared libraries "Dynamic Link Libraries", that does not mean they are not shared libraries. I feel in an encyclopedia struggling to stay unbiased one way or the other, letting a Microsoftism such as "page file" slip into any writing on Wikipedia is an admission that we accept said Microsoftism. I don't care if we're talking about Windows or turkey basters; Microsoftisms are not Wikipediaisms, under any circumstances. For those reasons, I feel Swapping, Thrash (computer science), Mapping, Virtual memory, and Memory management need attention on this issue, just to name a few.
I should be writing about virtual memory, regardless of what you think, Warrens. Because of Microsoftisms like that, articles like Virtual memory are turning horrible. --JStalk 00:30, 31 August 2006 (UTC)
Oh, and, you can release the kernel locks on the swap file programmatically if you know the API to touch and are in the right Windows subsystem, you will just crash your machine -- I'd proof of concept it, but I'm not in the mood and it would require me to dust off my C expertise.
Although I agree the bit about moving the file while the OS was running was a bit much. Terms like "impossible" are a bit strong, though. --JStalk 00:35, 31 August 2006 (UTC)

First of all, car analogies have no place in a discussion about virtual memory. Let's stay focused. It wastes your time writing it; it wastes my time reading it and trying to understand what the heck you're trying to say.

Second, the fact that you are linking to an article which more or less perfectly restates what I've already said -- and what the article itself has said for a very long time -- makes me wonder why you're arguing this so much. Is it because you don't like being told you're wrong? You blew away factually correct information in favour of factually incorrect information, and you got called on it; believe me, I can understand why you'd be pissed off, but don't take it personally... consider it an opportunity to correct false presumptions and to learn something.

Third, regarding this:

I feel in an encyclopedia struggling to stay unbiased one way or the other, letting a Microsoftism such as "page file" slip into any writing on Wikipedia is an admission that we accept said Microsoftism.

You really, really need to read Wikipedia:Neutral point of view. Slowly and carefully. Don't even bother contributing to Wikipedia again until you've done this. I'll quote the second half of the very first sentence of Wikipedia's NPOV policy here, because it's relevant to the mistake you're making: "(Articles) must represent all significant views fairly and without bias." What this means is, you as an editor can't declare a term to be a "Microsoftism" and thus render their terminology invalid and not suitable for inclusion in an article. Microsoft is the #1 operating system vendor in the world; their Windows NT implementation of virtual memory and paging exists on over half a billion computers, and that number grows every day. Accordingly, what they name a technology carries a lot of weight. If Microsoft calls it a page file -- and they have that right, since it's their creation -- then we report it as a page file. End of discussion.

Wikipedia isn't here for you to espouse your opinion on how Microsoft got their naming wrong. Go start a blog or something if you want to do that. -/- Warren 12:12, 31 August 2006 (UTC)


As everyone appears to agree that this article needs to be split, and it still hasn't been done yet, I'm tagging it with {{split-apart}}. -- intgr 19:53, 25 November 2006 (UTC)

Virtual Memory's Real definition

Virtual memory is a method whereby the operating system uses the hard drive as though it were RAM when the OS is low on RAM. The data stored on the hard drive is called the swap page or page file.—The preceding unsigned comment was added by 192.234.16.2 (talkcontribs).

No it isn't. What you're referring to is "memory swapping". This is a common mistake resulting from Microsoft incorrectly referring to the enabling of memory swapping as "virtual memory". The article's definition is correct. Windows uses virtual memory even if the so-called virtual memory is switched off; turning this off in fact turns off the memory swapping. Guinness 10:49, 11 October 2006 (UTC)
Pretty well answered. Here in my company we've set up a written test for applicants to computer-related positions. This particular question, "What is virtual memory?", was never answered correctly. People keep saying: "A technique to extend real memory", "The use of the hard disk paging file as memory", etc. The Wikipedia definition is the correct one, but it is a bit too long. There's a pretty neat definition I've stumbled upon on the net, which is even more concise and just as accurate as the Wikipedia one:
Addressable space that appears to be real storage. From virtual storage, instructions and data are mapped into real storage locations. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of auxiliary storage available, not by the actual number of system memory locations. Contrast with real memory. Synonymous with virtual storage.
I love the idea that your company is asking a question about virtual memory. Too often people have no clue, or they do not care about it because we do not use it every day at our jobs. I would be a bit careful, though, about being too critical of the answer if this is the only way you are asking the question. An easy question to ask someone, to see if they know the difference between the paging file and virtual memory, is to ask them to determine how much of the paging file is being used, or to ask what value is displayed in the PF Usage field in the Task Manager. This may get better results without making the question so open-ended; you could also ask them to describe the difference between protected, virtual and real memory.

www.ncsa.uiuc.edu/UserInfo/Resources/Hardware/IBMp690/IBM/usr/share/man/info/en_US/a_doc_lib/aixuser/glossary/V.htm Loudenvier 12:50, 11 October 2006 (UTC)
I would like you all to agree on some fundamentals, more or less:
virtual memory - a mechanism allowing the OS to use devices other than physical RAM as RAM too;
swapping - a mechanism allowing the OS to transfer data between a virtual memory device and physical RAM;
paging - a mechanism allowing the OS to address both physical and virtual memory device data in a unified manner;
pagefile/swapfile - (it looks like these are the same; Windows users like "pagefile", UNIX/Linux users like "swapfile", per historical naming conventions) a filesystem representation of data stored in virtual memory; this refers to devices with filesystems only, e.g. HDDs or removable flash drives (useless, but doable).
Then once you agree on these fundamentals, you may start to discuss details (and maybe stop indulging yourselves)... Maybe for some confused people it would be nice to differentiate between RAM and storage, or just point them in the right direction... My 2 cents. BTW, the Windows section is kind of wrong. [Yellow01]

People who do know what they're talking about do agree on the terms (although your definitions are somewhat sketchy right now). It's the clueless people who keep equating virtual memory with swapping, and the article is currently fairly bad at making that difference clear (and keeps blurring the line). As long as nobody fixes, splits and clears up the article, we just have to tolerate and ignore these people. -- intgr 09:16, 28 November 2006 (UTC)
"virtual memory - a mechanism allowing OS to use other devices other than physical RAM as RAM too;" - Again, this is wrong. Virtual memory, or to give it its full title, "virtual memory addressing", is simply the technique whereby non-contiguous memory blocks are presented to an application as a contiguous address space. It is entirely independent of the physical memory, whether that be volatile RAM, magnetic disk, or, hell, even punch cards could be addressed virtually. The fact that VMA is commonly used in conjunction with swapping is neither here nor there; they are two distinct technologies, and each can be utilised with or without the other. (Intgr - totally agree, I've been saying for months that I want to re-write them both, but haven't yet had the time; maybe I'll find time over Christmas, unless someone beats me to it). Guinness 09:05, 18 December 2006 (UTC)
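The point that virtual addressing presents non-contiguous blocks as contiguous, independently of any swapping, can be sketched as follows (a hypothetical Python toy; the tiny page size and the frame contents are made up for readability):

```python
PAGE_SIZE = 4  # tiny pages to keep the example readable

# Physical "RAM" frames are scattered; the page table maps consecutive
# virtual page numbers onto those scattered frames.
ram = {7: b"ABCD", 2: b"EFGH", 9: b"IJKL"}
page_table = {0: 7, 1: 2, 2: 9}

def read_virtual(vaddr):
    """Read one byte through the virtual mapping."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    return ram[page_table[page]][offset]

# The application sees one contiguous buffer, even though the backing
# frames are non-contiguous in physical memory -- and no disk or swap
# device is involved at all.
data = bytes(read_virtual(a) for a in range(12))
```

No swapping occurs anywhere in this sketch, which is exactly the distinction being argued: the remapping alone is virtual memory addressing.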

Joining Logical Address

I don't personally think Logical Address should be joined into Virtual Memory until we've fixed the Virtual Memory a bit. toresbe 09:15, 3 December 2006 (UTC)

I am new to editing, so here is my attempt at describing virtual memory. Eric 20:03, 8 January 2007 (UTC)

The memory pages of the virtual address space seen by the process, may reside non-contiguously in primary, or even secondary storage.

Virtual memory or virtual memory addressing is an addressing scheme that requires implementations in both hardware and software.

The hardware must have two methods of addressing RAM: real and virtual. In real mode, the memory address register contains the integer that addresses a word or byte of RAM. The memory is addressed sequentially; by adding to the address register, the location of the memory being addressed moves forward by the number added.

In virtual mode, memory is divided into pages, usually 4,096 bytes long. These pages may reside in any available RAM location that can be addressed in virtual mode. The high-order bits in the memory address register reference tables in RAM, at specific locations low in memory, which are addressed using real addresses. The low-order bits of the address register are an offset of up to 4,096 bytes into the page ultimately referenced by resolving all the table references of page locations.
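A minimal sketch of that address decomposition (hypothetical Python, assuming a single-level table and 4,096-byte pages for simplicity; real hardware chains through multiple table levels as described below):

```python
PAGE_SIZE = 4096
OFFSET_BITS = 12  # 2 ** 12 == 4096

def split_virtual_address(vaddr):
    """Split a virtual address into (page number, offset within the page)."""
    page = vaddr >> OFFSET_BITS        # high-order bits select the page
    offset = vaddr & (PAGE_SIZE - 1)   # low-order bits are the byte offset
    return page, offset

def resolve(vaddr, page_table):
    """Map a virtual address to a real address via a one-level page table."""
    page, offset = split_virtual_address(vaddr)
    return page_table[page] + offset   # table holds each page's real start
```

For example, with the made-up table `{0x12: 0x9000}`, the virtual address `0x12345` resolves to the real address `0x9345`.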

The size of the tables is governed by the computer design and the amount of RAM purchased by the user. All virtual addressing schemes require the page tables to start at a fixed location low in memory that can be addressed by a single byte, and to have a maximum length determined by the hardware design. In multitasked systems with more than one user, the tables further down the chain of arrays are duplicated for each user and can reside in any location addressable in real mode.

In a typical computer, the first table is an array of addresses of the start of the next table, and the high-order bits of the memory address register are the index into the array. Depending on the design goals of the computer, each array entry can be any size the computer can address.

The number and size of the tables vary by manufacturer, but the end goal is to take the high-order bits of the virtual address in the memory address register and resolve them to an entry in the page table that points either to the location of the page in real memory or to a flag saying the page is not available.

If a program references a memory location within a page that is not available, the computer generates a page fault. This passes control to the operating system at a place that can load the required page from auxiliary storage and turn on the flag saying the page is available. The hardware then takes the start location of the page, adds in the offset from the low-order bits of the address register, and accesses the memory location desired.

All the work required to access the correct memory address is invisible to the application addressing the memory. If the page is in memory, the hardware resolves the address. If a page fault is generated, software in the operating system resolves the problem and passes control back to the application trying to access the memory location.

This entire scheme provides two major features to the computer user:

1. Applications can use more memory than is physically installed in the computer. At some point, if the application is using much more memory than actually exists, the number of page faults will degrade system performance. The actual maximum usable ratio of real to virtual memory depends on the application and the order it uses to address memory.
2. The system can provide total memory isolation between users and applications by maintaining separate page tables for each user; memory used by one user is invisible to other users, since each has their own page tables. There is overhead with this technique, since page tables have to be loaded and saved every time there is a context switch to a different user.
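The isolation in point 2 can be illustrated with a tiny sketch (hypothetical Python; the tables and frame numbers are made up):

```python
PAGE_SIZE = 4096

# Each user/process gets its own page table mapping virtual page numbers
# to physical frame numbers.  The same virtual page number can map to
# different frames, so one process can never address another's memory.
page_table_a = {0: 5}   # process A: virtual page 0 -> physical frame 5
page_table_b = {0: 9}   # process B: virtual page 0 -> physical frame 9

def to_physical(page_table, vaddr):
    """Translate a virtual address using the given process's page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

# On a context switch, the OS simply points the hardware at the other
# process's page table (the overhead the text mentions), and the same
# virtual address now lands in completely different physical memory.
```

Identical virtual addresses in the two processes resolve to different physical locations, which is the isolation property being described.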

--Eric 20:03, 8 January 2007 (UTC)

I think the current first paragraph is better than yours. You are correct in the detail, however this describes how virtual memory is implemented. The first para should stick to the definition and implementation detail should follow. Guinness 03:07, 14 January 2007 (UTC)

I am new to trying to edit anything here, so I went back and re-read the current first paragraph. I don't think it is an accurate definition: it defines virtual memory as a technique used by operating systems and does not talk about the hardware. Any definition needs to describe both the hardware and software implementations, since without a hardware implementation, virtual memory would be impossible to implement. In addition, it refers to virtual memory as "more commonly used in multitasking". Although this is true, it has nothing to do with a definition of virtual memory and is misleading to a novice.

Perhaps I should have started with the following definition: Virtual memory is an addressing scheme, implemented in hardware and software, that allows discontiguous memory to be addressed as if it were contiguous. The technique used by all current implementations provides two major capabilities to the system:

1. Memory can be addressed that does not currently reside in main memory; the hardware and operating system load the required memory from auxiliary storage automatically, without the knowledge of the program addressing that memory.

2. In multitasking systems, total memory isolation can be provided to every task except the lowest level of the operating system. --Sailorman2003 19:29, 29 January 2007 (UTC)

I do agree with and like this definition. -- intgr 07:44, 30 January 2007 (UTC)

The first paragraph of the background that I added needs a description of what happens when you add an integer to the memory address register in Virtual Mode. I am too tired to do it tonight and I am open to suggestions on the wording.--Eric 02:53, 5 February 2007 (UTC)

[edit] "Separate swap partition for Windows"

I removed this paragraph:

Also, though it is not very common for Windows users, it is possible to use a whole partition of a HDD for swapping, as most Linux users do (see below). By using a separate swap partition, it can be guaranteed that the swap region is at the fastest location of the disk. On HDDs with moving heads, this is generally the center.

First of all, modern Windows operating systems don't use the word "swap"; any text that uses this term is immediately suspect. Second, the notion that the "center" of a disk is fastest is unsupportable by fact. Third, raw page file performance is, most of the time, not relevant, because there are usually multiple I/O requests going at the same time, resulting in the heads doing a lot of moving around. Page file access is almost never sequential in nature, when reading -or- writing, so any performance benefits that could come from having it located at the fastest part of the drive are nullified. Fourth, when Windows Setup runs, it generally tries to put the page file in a place that's pretty fast anyhow; what usually ends up happening is it gets put near the operating system files, which is actually a pretty good way of reducing the sheer amount of drive-head movement in a heavy paging situation. Separate page file partitions tend to put the page file further away from the operating system files and user data, thus reducing performance. -/- Warren 13:36, 8 February 2007 (UTC)

Deliberate nonsense and misinformation. The file win386.swp (Win 9x/Me) doesn't behave in the way you describe; nevertheless, what you say applies to pagefile.sys (Win NT/2000 Pro/XP). You also removed some sentences that, prior to my recent edits, belonged to the Linux section. Do you mean that this method is nonsense in Linux environments too?--Dr. Who 13:48, 8 February 2007 (UTC)
OK, listen, I have left this article and restored it as it was previous to my edits. I blanked my user page and I hope you will rest well. I was not planning to become a nightmare for the lots of American/British/Commonwealth arts/science/technology gurus that are here under many umbrella nicks, so I'm leaving. Dr. Who 14:15, 8 February 2007 (UTC)
"Second, the notion that the "center" of a disk is fastest is unsupportable by fact."
Although I cannot be bothered to spend time looking for reliable sources right now, this is a very common and accepted fact among people who deal with disk storage. If you have time to kill, you can refer to any hard disk review's benchmarks for evidence. For example, refer to the minimum/maximum transfer rate diagram of this review: [2], and specifically the transfer rate/offset decay graph [3].
And I would really rather not start another jargon debate, but I cannot see how using "swap" in relation to Windows is wrong. The word doesn't magically change its meaning — swap is still swap, whether on Windows, Unix or $yourFavouriteOS. -- intgr 15:33, 8 February 2007 (UTC)
Disk performance is governed by the rotation speed, the seek speed, and the data width (assuming we are not talking about RAID devices). Since the disk is a rigid platter and all parts rotate at the same speed, the probability of the arm reaching a track just after the desired sector has passed the head is the same no matter which track is being accessed. By the same logic, the average time required to reach the desired sector on any given track is the same, due to the constant rotation speed.--Eric 18:21, 21 February 2007 (UTC)
That's correct; I was forgetting that swap performance is generally dominated by disk seeks (though outer tracks are definitely faster for sequential reads). -- intgr 18:38, 21 February 2007 (UTC)
Angular velocity vs. linear velocity --Doktor Who 22:45, 22 February 2007 (UTC)
Velocity is not important here at all. The problem with swap is that primary storage is generally assumed to be random-access memory, i.e., requests to any address are assumed to take constant time, so data is often scattered around near-randomly. However, random access order is the worst possible order for sequential-access storage devices. On average, hard disks without command queuing have to wait a little more than half a rotation on every seek. On a 7200 RPM disk, this means that if your requests are shorter than ~330 kB (inner tracks) or ~500 kB (outer tracks), the disk spends over 50% of its time seeking. While operating systems most likely implement some kind of readahead to swap in more than a single page at a time, the request size is probably still short. Hence, the performance will be dominated by disk seeks and, ultimately, the difference in raw throughput will be negligible. -- intgr 07:30, 23 February 2007 (UTC)
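The arithmetic behind figures of this kind is easy to check. The sketch below assumes sustained transfer rates of 120 MB/s (outer zones) and 80 MB/s (inner zones); those two numbers are illustrative assumptions, not from the comment above. It computes the average rotational latency of a 7200 RPM disk and the request size at which transfer time equals that latency.

```python
# Back-of-the-envelope check: break-even request sizes for a 7200 RPM disk.
RPM = 7200
half_rotation_s = 0.5 * 60 / RPM    # average rotational latency in seconds

def breakeven_kb(rate_mb_s):
    # Request size (in kB) whose transfer time equals the average
    # rotational latency; shorter requests spend more than half their
    # time waiting for the platter rather than transferring data.
    return rate_mb_s * 1000 * half_rotation_s

print(round(half_rotation_s * 1000, 2), "ms")  # 4.17 ms
print(round(breakeven_kb(120.0)), "kB")        # 500 kB (assumed outer rate)
print(round(breakeven_kb(80.0)), "kB")         # 333 kB (assumed inner rate)
```

So the rough "hundreds of kilobytes" break-even point follows directly from rotational latency times throughput, whatever the exact zone rates of a given drive.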

I'm responsible for the "On HDDs with moving heads, this is generally the center" wording, and it was misunderstood because I wrote it too quickly. I have replaced the restored wording about the beginning of the disk with "generally the center cylinders between the inner and outer edges of the disk (except for disks with fixed heads)" - that is, on a 100-cylinder disk, "center" means cylinder 50, not the innermost cylinder. It's a well-documented fact in system performance literature, going back over 30 years, that seek time almost always outweighs all other considerations for placement of files on disks, the only exceptions being when you can eliminate it entirely (e.g., fixed heads, non-volatile cache). Yes, on some drives there are different data densities between the innermost and outermost tracks, but seek time continues to overwhelm all other aspects, even that one. RossPatterson 23:46, 21 February 2007 (UTC)

It is certainly true that on a 100-cylinder disk, data on cylinder 50 guarantees that you can't seek more than 50 cylinders, and data in the middle third of the disk will most likely be accessed with the head crossing the fewest cylinders. My experience with disks is old, but the last time I dealt with them, the bulk of the seek time was in the acceleration and deceleration of the arm. Once the arm was at speed, the distance of the movement was a smaller proportion of the total seek time.
The primary factor governing paging performance is thrashing, caused by the working set being too large in relation to the size of RAM. Second to that is the accuracy of the algorithm that selects the page to evict, and third is the number of dirty pages that have to be written out before they can be re-used.
The speed of the disk determines the efficiency of the single-server queue that is usually all that is available on small computer systems; it is rare for a multi-server paging queue to be available. So, the faster the disk, the greater the number of page faults that can be serviced without the queue growing too large, thus enabling a larger working set for a given RAM size when a faster disk is used.--Eric 19:50, 22 February 2007 (UTC)
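The single-server-queue argument can be illustrated with a minimal utilization calculation (the fault rate and service times below are made-up numbers, and a real paging device is not a clean M/M/1 queue; this only shows the stability threshold).

```python
# Toy single-server queue model for a paging device: once the arrival
# rate times the service time exceeds 1.0, the queue grows without bound.
def utilization(faults_per_s, service_ms):
    return faults_per_s * service_ms / 1000.0

slow_ms, fast_ms = 12.0, 6.0   # assumed per-page-in service times

print(utilization(100, slow_ms))  # 1.2 -> overloaded, queue keeps growing
print(utilization(100, fast_ms))  # 0.6 -> stable, with headroom to spare
```

Halving the service time doubles the page-fault rate the device can absorb before saturating, which is why a faster disk effectively permits a larger working set for the same RAM.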

[edit] Recent "sizing virtual memory" section

I removed this section recently added by User:Jcea:

Historically, operating systems had to be configured with either no swap space at all or at least as much swap space as available RAM. The reason was that main memory (RAM) was considered a cache for swap space. So, if you had 64 MB of RAM and no swap, your applications were limited to 64 MB, but if you configured 128 MB of swap, your applications could use 128 MB.
So, a common rule of thumb was used: if you need swap, configure twice as much swap space as RAM. This would double the virtual memory usable by applications, while keeping thrashing under control.
Current operating systems provide virtual memory as the sum of physical RAM plus swap space, so you can configure a swap size smaller than system memory. The risk of thrashing is also reduced because current paging algorithms are cleverer, disks are faster, and main memory is bigger.

Because I think it's factually wrong and it doesn't cite any sources. First, I have yet to hear of an operating system that would duplicate a significant proportion of its in-memory storage in the swap space; certainly not around the time when computers started reaching 64 MB of main memory (there were some with a single-level store, however that has little to do with the concept of swapping). Second, it repeats the popular misconception that virtual memory is merely "RAM + swap space". Finally, stating that disks are getting faster and memories are getting larger is useless; what matters from the performance aspect is the difference between their growth rates. Less swap space is being used since the performance of hard disks can't keep up with the performance of primary storage; page replacement algorithms are more critical than ever only for disk cache concerns. -- intgr 23:02, 8 March 2007 (UTC)